Group No.: 17
Member: ZHU Lei (EID: lzhu68, SID: 55883618)

Summary of this notebook

In Section 1, exploratory data analysis is provided. Specifically, several sample images are shown, and the per-channel intensity distributions and the label distribution are visualized with plotly.

In Section 2, miscellaneous utility functions are defined.

In Section 3, several solutions are tried, specifically:

  • In Section 3.1, a bag-of-visual-words (BoW) feature and two machine learning classifiers (logistic regression and RBF-kernel SVM) are used. Two kinds of local descriptors, SIFT and ORB, are used for visual word extraction. Visual words are derived by KMeans clustering, and different vocabulary sizes are tried. The best validation accuracy I obtained in this part is 0.657330.

  • In Section 3.2, I use 3 CNN architectures (MobileNetV2, InceptionResNetV2, VGG16) to extract deep features, and apply dimension reduction (Kernel PCA and NMF) followed by a classifier (one of linearSVM, rbfSVM, LR) to do the classification. The best combination (MobileNetV2 features + no dimension reduction + LR) gives validation accuracy 0.862716.

  • In Section 3.3, I fine-tuned 4 CNN architectures (ResNet101V2, MobileNetV2, InceptionResNetV2, VGG16) end-to-end. The best one (InceptionResNetV2) gives validation accuracy 0.974453.

  • In Section 3.4, I retried fine-tuning the CNN architectures mentioned above after the background of the input images is removed with the GrabCut algorithm. The best validation score I derived in this section is

In Section 4, several of the best results are ensembled to get the final submission.

Exploratory data analysis

Preparing the ground

Install and import necessary libraries

In [1]:
# !pip install -q efficientnet
# !pip install opencv-python==3.4.2.17
# !pip install opencv-contrib-python==3.4.2.17
# !conda install tensorflow-gpu -y
# !conda install keras=2.3.1 -y
# !conda install pandas -y
# !conda install tqdm -y
# !conda install scikit-learn -y
# !conda install plotly -y
In [1]:
import os
import gc
import re

import cv2
import math
import numpy as np
import scipy as sp
import pandas as pd

import tensorflow as tf
from IPython.display import SVG
import efficientnet.tfkeras as efn
from keras.utils import plot_model
import tensorflow.keras.layers as L
from keras.utils import model_to_dot
import tensorflow.keras.backend as K
from tensorflow.keras.models import Model
# from kaggle_datasets import KaggleDatasets
from tensorflow.keras.applications import InceptionResNetV2

# import seaborn as sns
from tqdm import tqdm
import matplotlib.cm as cm
from sklearn import metrics
import matplotlib.pyplot as plt
from sklearn.utils import shuffle
from sklearn.model_selection import train_test_split, GridSearchCV, ParameterGrid

tqdm.pandas()
import plotly.express as px
import plotly.graph_objects as go
import plotly.figure_factory as ff
from plotly.subplots import make_subplots

from collections import OrderedDict


from sklearn.decomposition import PCA, KernelPCA, TruncatedSVD, NMF
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC, LinearSVC
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import Normalizer, StandardScaler
from sklearn import cluster
from joblib import parallel_backend, Parallel, delayed

# import warnings
# warnings.filterwarnings("ignore")
Using TensorFlow backend.

Load the data and define hyperparameters

In [2]:
EPOCHS = 20
SAMPLE_LEN = 100
IMAGE_PATH = "../input/plant-pathology-2020-fgvc7/images/"
TEST_PATH = "../input/plant-pathology-2020-fgvc7/test.csv"
TRAIN_PATH = "../input/plant-pathology-2020-fgvc7/train.csv"
SUB_PATH = "../input/plant-pathology-2020-fgvc7/sample_submission.csv"

sub = pd.read_csv(SUB_PATH)
test_data = pd.read_csv(TEST_PATH)
train_data = pd.read_csv(TRAIN_PATH)
In [3]:
train_data.head()
Out[3]:
image_id healthy multiple_diseases rust scab
0 Train_0 0 0 0 1
1 Train_1 0 1 0 0
2 Train_2 1 0 0 0
3 Train_3 0 0 1 0
4 Train_4 1 0 0 0
In [4]:
test_data.head()
Out[4]:
image_id
0 Test_0
1 Test_1
2 Test_2
3 Test_3
4 Test_4

Load sample images

In [5]:
def load_image(image_id):
    file_path = image_id + ".jpg"
    image = cv2.imread(IMAGE_PATH + file_path)
    return cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

train_images = train_data["image_id"][:SAMPLE_LEN].progress_apply(load_image)
100%|██████████| 100/100 [00:01<00:00, 78.40it/s]

Visualize one leaf

Sample image

In [7]:
fig = px.imshow(cv2.resize(train_images[0], (205, 136)))
fig.show()

I have plotted the first image in the training data above (the RGB values can be seen by hovering over the image). The green (healthy) parts of the leaf have very low blue values, whereas the brown (unhealthy) parts have high blue values. This suggests that the blue channel may be the key to detecting diseases in plants.

Channel distributions

In [8]:
red_values = [np.mean(train_images[idx][:, :, 0]) for idx in range(len(train_images))]
green_values = [np.mean(train_images[idx][:, :, 1]) for idx in range(len(train_images))]
blue_values = [np.mean(train_images[idx][:, :, 2]) for idx in range(len(train_images))]
values = [np.mean(train_images[idx]) for idx in range(len(train_images))]

All channel values

In [9]:
fig = ff.create_distplot([values], group_labels=["Channels"], colors=["purple"])
fig.update_layout(showlegend=False, template="simple_white")
fig.update_layout(title_text="Distribution of channel values")
fig.data[0].marker.line.color = 'rgb(0, 0, 0)'
fig.data[0].marker.line.width = 0.5
fig

The channel values seem to have a roughly normal distribution centered around 105. The maximum channel activation is 255, so the average channel value is less than half of the maximum, indicating that the channels are only weakly activated most of the time.

Red channel values

In [10]:
fig = ff.create_distplot([red_values], group_labels=["R"], colors=["red"])
fig.update_layout(showlegend=False, template="simple_white")
fig.update_layout(title_text="Distribution of red channel values")
fig.data[0].marker.line.color = 'rgb(0, 0, 0)'
fig.data[0].marker.line.width = 0.5
fig

The red channel values seem to follow a roughly normal distribution, but with a slight rightward (positive) skew. This indicates that the red channel tends to be concentrated at lower values, around 100. There is large variation in average red values across images.

Green channel values

In [11]:
fig = ff.create_distplot([green_values], group_labels=["G"], colors=["green"])
fig.update_layout(showlegend=False, template="simple_white")
fig.update_layout(title_text="Distribution of green channel values")
fig.data[0].marker.line.color = 'rgb(0, 0, 0)'
fig.data[0].marker.line.width = 0.5
fig

The green channel values have a more uniform distribution than the red channel values, with a smaller peak. The distribution also has a leftward skew (in contrast to red) and a larger mode of around 140. This indicates that green is more pronounced in these images than red, which makes sense, because these are images of leaves!

Blue channel values

In [12]:
fig = ff.create_distplot([blue_values], group_labels=["B"], colors=["blue"])
fig.update_layout(showlegend=False, template="simple_white")
fig.update_layout(title_text="Distribution of blue channel values")
fig.data[0].marker.line.color = 'rgb(0, 0, 0)'
fig.data[0].marker.line.width = 0.5
fig

The blue channel has the most uniform distribution out of the three color channels, with minimal skew (slight leftward skew). The blue channel shows great variation across images in the dataset.

All channel values (together)

In [13]:
fig = go.Figure()

for idx, values in enumerate([red_values, green_values, blue_values]):
    if idx == 0:
        color = "Red"
    if idx == 1:
        color = "Green"
    if idx == 2:
        color = "Blue"
    fig.add_trace(go.Box(x=[color]*len(values), y=values, name=color, marker=dict(color=color.lower())))
    
fig.update_layout(yaxis_title="Mean value", xaxis_title="Color channel",
                  title="Mean value vs. Color channel", template="plotly_white")
In [14]:
fig = ff.create_distplot([red_values, green_values, blue_values],
                         group_labels=["R", "G", "B"],
                         colors=["red", "green", "blue"])
fig.update_layout(title_text="Distribution of all channel values", template="simple_white")
fig.data[0].marker.line.color = 'rgb(0, 0, 0)'
fig.data[0].marker.line.width = 0.5
fig.data[1].marker.line.color = 'rgb(0, 0, 0)'
fig.data[1].marker.line.width = 0.5
fig.data[2].marker.line.color = 'rgb(0, 0, 0)'
fig.data[2].marker.line.width = 0.5
fig

From the above plots, we can clearly see which colors are more common and which ones less common in the leaf images. Green is the most pronounced color, followed by red and blue respectively. The distributions, when plotted together, appear to have a similar shape, but shifted horizontally.

Visualize sample leaves

Now, I will visualize sample leaves belonging to different categories in the dataset.

In [15]:
def visualize_leaves(cond=[0, 0, 0, 0], cond_cols=["healthy"], is_cond=True):
    if not is_cond:
        cols, rows = 3, min([3, len(train_images)//3])
        fig, ax = plt.subplots(nrows=rows, ncols=cols, figsize=(30, rows*20/3))
        for col in range(cols):
            for row in range(rows):
                ax[row, col].imshow(train_images.loc[train_images.index[-row*3-col-1]])
        return None
        
    cond_0 = "healthy == {}".format(cond[0])
    cond_1 = "scab == {}".format(cond[1])
    cond_2 = "rust == {}".format(cond[2])
    cond_3 = "multiple_diseases == {}".format(cond[3])
    
    cond_list = []
    for col in cond_cols:
        if col == "healthy":
            cond_list.append(cond_0)
        if col == "scab":
            cond_list.append(cond_1)
        if col == "rust":
            cond_list.append(cond_2)
        if col == "multiple_diseases":
            cond_list.append(cond_3)
    
    data = train_data[:100]
#     print(len(data))
    for cond in cond_list:
        data = data.query(cond)
        
#     print(list(data.index))    
    images = train_images.loc[list(data.index)]
    cols, rows = 3, min([3, len(images)//3])
    
    fig, ax = plt.subplots(nrows=rows, ncols=cols, figsize=(30, rows*20/3))
    for col in range(cols):
        for row in range(rows):
            ax[row, col].imshow(images.loc[images.index[row*3+col]])
    plt.show()

Healthy leaves

In [16]:
visualize_leaves(cond=[1, 0, 0, 0], cond_cols=["healthy"])

In the above images, we can see that the healthy leaves are completely green and do not have any brown/yellow spots or scars. Healthy leaves have neither scab nor rust.

Leaves with scab

In [17]:
visualize_leaves(cond=[0, 1, 0, 0], cond_cols=["scab"])

In the above images, we can see that leaves with "scab" have large brown marks and stains across the leaf. Scab is defined as "any of various plant diseases caused by fungi or bacteria and resulting in crustlike spots on fruit, leaves, or roots. The spots caused by such a disease". The brown marks across the leaf are a sign of these bacterial/fungal infections. Once diagnosed, scab can be treated using chemical or non-chemical methods.

Leaves with rust

In [18]:
visualize_leaves(cond=[0, 0, 1, 0], cond_cols=["rust"])

In the above images, we can see that leaves with "rust" have several brownish-yellow spots across the leaf. Rust is defined as "a disease, especially of cereals and other grasses, characterized by rust-colored pustules of spores on the affected leaf blades and sheaths and caused by any of several rust fungi". The yellow spots are a sign of infection by a special type of fungi called "rust fungi". Rust can also be treated with several chemical and non-chemical methods once diagnosed.

Leaves with multiple diseases

In [19]:
visualize_leaves(cond=[0, 0, 0, 1], cond_cols=["multiple_diseases"])

In the above images, we can see that the leaves show symptoms for several diseases, including brown marks and yellow spots. These plants have more than one of the above-described diseases.

Visualize label distribution

Now, I will visualize the label distribution of the training data using a pie chart.

In [20]:
fig = go.Figure([go.Pie(labels=train_data.columns[1:],
           values=train_data.iloc[:, 1:].sum().values)])
fig.update_layout(title_text="Pie chart of targets", template="simple_white")
fig.data[0].marker.line.color = 'rgb(0, 0, 0)'
fig.data[0].marker.line.width = 0.5
fig.show()

In the pie chart above, we can see that most leaves in the dataset are unhealthy (71.7%). Only 5% of plants have multiple diseases, and "rust" and "scab" occupy approximately one-third of the pie each. In short:

  • the numbers of scab, rust, and healthy samples are roughly balanced;
  • multiple_diseases samples are significantly fewer than the other classes.

We may need to apply class weights or a resampling strategy to handle this imbalance.
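As a minimal sketch of the class-weight option (my own illustration, not part of the notebook): scikit-learn can compute "balanced" weights that are inversely proportional to the class frequencies. The label counts below are hypothetical, chosen only to mimic the imbalance visible in the pie chart.

```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

# hypothetical label counts mimicking the imbalance seen in the pie chart
y = np.array([0] * 516 + [1] * 91 + [2] * 622 + [3] * 592)

# weight_c = n_samples / (n_classes * count_c): rarer classes get larger weights
weights = compute_class_weight(class_weight="balanced",
                               classes=np.unique(y), y=y)
print(dict(zip(["healthy", "multiple_diseases", "rust", "scab"], weights)))
```

The resulting dictionary can be passed as `class_weight` to most sklearn estimators (and, analogously, to `Model.fit` in Keras), so the minority class contributes more to the loss.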

Prepare training utilities

device placement and hyperparameters

In [6]:
# TPU or GPU detection
# Detect hardware, return appropriate distribution strategy
try:
    tpu = tf.distribute.cluster_resolver.TPUClusterResolver()
    print('Running on TPU ', tpu.master())
except ValueError:
    tpu = None

if tpu:
    tf.config.experimental_connect_to_cluster(tpu)
    tf.tpu.experimental.initialize_tpu_system(tpu)
    strategy = tf.distribute.experimental.TPUStrategy(tpu)
else:
    strategy = tf.distribute.get_strategy()
    
def seed_everything(seed=0):
    np.random.seed(seed)
    tf.random.set_seed(seed)
    os.environ['PYTHONHASHSEED'] = str(seed)
    os.environ['TF_DETERMINISTIC_OPS'] = '1'

SEED=2048
seed_everything(SEED)

print("REPLICAS: ", strategy.num_replicas_in_sync)
print("GPUs: {}".format(tf.config.experimental.list_physical_devices('GPU')))

# # Data access
# GCS_DS_PATH = KaggleDatasets().get_gcs_path()

# Configuration
AUTO = tf.data.experimental.AUTOTUNE
EPOCHS = 40
BATCH_SIZE = 16 * strategy.num_replicas_in_sync

VALIDATION_SIZE = 0.15
IMAGE_SIZE = 400

# multiprocessing
N_JOBS=-1
REPLICAS:  1
GPUs: [PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]

image path and labels

In [7]:
def format_path(st):
    return IMAGE_PATH + st + '.jpg'

test_paths = test_data.image_id.apply(format_path).values
trainval_paths = train_data.image_id.apply(format_path).values

trainval_labels = np.float32(train_data.loc[:, 'healthy':'scab'].values)

train_paths, valid_paths, train_labels, valid_labels =\
train_test_split(trainval_paths, trainval_labels, test_size=VALIDATION_SIZE,
                 random_state=SEED)

print('train samples: ', len(train_paths))
print('valid samples: ', len(valid_paths))
print('test samples: ', len(test_paths))
print('path example: ', train_paths[0])
print('label example: ',  train_labels[0])
train samples:  1547
valid samples:  274
test samples:  1821
path example:  ../input/plant-pathology-2020-fgvc7/images/Train_96.jpg
label example:  [0. 0. 1. 0.]

image loading

In [8]:
def decode_image(filename, label=None, image_size=(IMAGE_SIZE, IMAGE_SIZE)):
    bits = tf.io.read_file(filename)
    image = tf.image.decode_jpeg(bits, channels=3)
#     image = tf.cast(image, tf.float32) / 255.0
# https://www.tensorflow.org/tutorials/images/transfer_learning
# https://github.com/keras-team/keras-applications/blob/bc89834ed36935ab4a4994446e34ff81c0d8e1b7/keras_applications/imagenet_utils.py#L42
    image = tf.cast(image, tf.float32)
    image = (image/127.5) - 1
    image = tf.image.resize(image, image_size)
    
    if label is None:
        return image
    else:
        return image, label

def data_augment(image, label=None):
    image = tf.image.random_flip_left_right(image)
    image = tf.image.random_flip_up_down(image)
    
    if label is None:
        return image
    else:
        return image, label
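The scaling inside `decode_image` follows the "tf" preprocessing mode linked above: pixel values in [0, 255] are mapped to [-1, 1] via `(x / 127.5) - 1`. A tiny numpy check (my own illustration) of the endpoint behavior:

```python
import numpy as np

# the "tf" preprocessing mode: [0, 255] -> [-1, 1]
pixels = np.array([0.0, 127.5, 255.0])
scaled = pixels / 127.5 - 1
print(scaled)  # [-1.  0.  1.]
```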

training history display

In [9]:
def display_training_curves(training, validation, title, subplot):
    """
    Source: https://www.kaggle.com/mgornergoogle/getting-started-with-100-flowers-on-tpu
    """
    if subplot%10==1: # set up the subplots on the first call
        plt.subplots(figsize=(10,10), facecolor='#F0F0F0')
        plt.tight_layout()
    ax = plt.subplot(subplot)
    ax.set_facecolor('#F8F8F8')
    ax.plot(training)
    ax.plot(validation)
    ax.set_title('model '+ title)
    ax.set_ylabel(title)
    #ax.set_ylim(0.28,1.05)
    ax.set_xlabel('epoch')
    ax.legend(['train', 'valid.'])

get pipeline component by name

In [10]:
def get_backbone(cnn='VGG16'):
    assert cnn in ['ResNet101V2', 'VGG16', 'InceptionResNetV2', 'MobileNetV2']
    if cnn == 'ResNet101V2':
        backbone = tf.keras.applications.ResNet101V2(
            input_shape=(IMAGE_SIZE, IMAGE_SIZE, 3),
            weights='imagenet',
            include_top=False)
    if cnn == 'VGG16':
        backbone = tf.keras.applications.VGG16(
            input_shape=(IMAGE_SIZE, IMAGE_SIZE, 3),
            weights='imagenet',
            include_top=False)
    if cnn == 'InceptionResNetV2':
        backbone = tf.keras.applications.InceptionResNetV2(
            input_shape=(IMAGE_SIZE, IMAGE_SIZE, 3),
            weights='imagenet',
            include_top=False)
    if cnn == 'MobileNetV2':
        backbone = tf.keras.applications.MobileNetV2(
            input_shape=(IMAGE_SIZE, IMAGE_SIZE, 3),
            weights='imagenet',
            include_top=False)     
    return backbone
 

def get_classifier(name='linearSVM'):
    if name == 'linearSVM':
#         return LinearSVC(class_weight='balanced',
#                              probability=True)
        return SVC(kernel='linear',
                   class_weight='balanced',
                   probability=True)
    if name == 'rbfSVM':
        return SVC(kernel='rbf',
                       class_weight='balanced',
                       probability=True)
    if name == 'LR':
        return LogisticRegression()

    
def get_dim_reductor(name='PCA_128'):
    method, n_components = name.split('_')
    n_components = int(n_components)
#     print(method, n_components)
    if method == 'PCA':
        return PCA(n_components=n_components)
#         return KernelPCA(n_components=n_components)
    
    if method == 'KPCA':
        return KernelPCA(kernel='rbf', n_components=n_components)
    
    if method == 'LSA':
        return TruncatedSVD(n_components=n_components,
                            random_state=SEED)
    
    if method == 'NMF':
        return NMF(n_components=n_components)    
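These factory functions are combined later into reduce-then-classify pipelines. A minimal sketch (my own illustration, on synthetic data rather than the notebook's CNN features) of chaining a dimension reductor and a classifier with sklearn's Pipeline:

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
X = rng.rand(100, 64)            # stand-in for pooled deep features
y = rng.randint(0, 4, size=100)  # four hypothetical classes

pipe = Pipeline([
    ("reduce", PCA(n_components=16)),            # cf. get_dim_reductor('PCA_16')
    ("clf", LogisticRegression(max_iter=1000)),  # cf. get_classifier('LR')
])
pipe.fit(X, y)
print(pipe.predict(X[:5]).shape)  # (5,)
```

Wrapping the steps in a Pipeline also lets GridSearchCV tune the reductor and classifier hyperparameters jointly.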
    

create dataset object

In [11]:
trainval_dataset = (
    tf.data.Dataset
    .from_tensor_slices((trainval_paths, trainval_labels))
    .map(decode_image, num_parallel_calls=AUTO)
    .cache()
    .map(data_augment, num_parallel_calls=AUTO)
    .shuffle(512)
    .batch(BATCH_SIZE)
    .prefetch(AUTO)
)

train_dataset = (
tf.data.Dataset
    .from_tensor_slices((train_paths, train_labels))
    .map(decode_image, num_parallel_calls=AUTO)
    .cache()
    .map(data_augment, num_parallel_calls=AUTO)
    .repeat()
    .shuffle(512)
    .batch(BATCH_SIZE)
    .prefetch(AUTO)
)

train_dataset_1 = (
tf.data.Dataset
    .from_tensor_slices((train_paths, train_labels))
    .map(decode_image, num_parallel_calls=AUTO)
    .cache()
    .map(data_augment, num_parallel_calls=AUTO)
    .repeat()
    .shuffle(512)
    .batch(64)
    .prefetch(AUTO)
)

valid_dataset = (
    tf.data.Dataset
    .from_tensor_slices((valid_paths, valid_labels))
    .map(decode_image, num_parallel_calls=AUTO)
    .batch(BATCH_SIZE)
    .cache()
    .prefetch(AUTO)
)

test_dataset = (
    tf.data.Dataset
    .from_tensor_slices(test_paths)
    .map(decode_image, num_parallel_calls=AUTO)
    .map(data_augment, num_parallel_calls=AUTO)
    .batch(BATCH_SIZE)
)

create output directories

In [12]:
ckpt_dir = '../output/best_models'
submission_dir = '../output/submissions'
os.makedirs(ckpt_dir, exist_ok=True)
os.makedirs(submission_dir, exist_ok=True)

Train classifiers

BoW feature + ML classifiers

  • local feature: SIFT, ORB
  • image level feature: Bag of Visual Words (BoVW) with different visual vocabulary sizes (10, 20, 50, 100, 200, 500)
    • Visual words are derived using KMeans clustering
  • classifiers: LR, rbfSVM
In [28]:
class BoW(object):
    def __init__(self, local_feature='SIFT', vsize=3):
        if local_feature == 'SIFT':
            self.local_feature_extractor = cv2.xfeatures2d.SIFT_create()
        if local_feature == 'SURF':
            self.local_feature_extractor = cv2.xfeatures2d.SURF_create()
        if local_feature == 'ORB':
            self.local_feature_extractor = cv2.ORB_create()
        
        self.vsize = vsize
        self.kmeans = None
    
    def fit(self, im_paths):
#         des_mat_list = [ self.get_local_feature_by_path(im_path)[1] for
#                        im_path in tqdm(im_paths, desc='Extrcting feature points') ]
        
        des_mat_list = Parallel(n_jobs=N_JOBS, backend='threading')\
            (delayed(self.get_local_feature_by_path)(im_path)
             for im_path in tqdm(im_paths, desc='Extracting feature points')
            )
        
        des_mat_all = np.concatenate(des_mat_list, axis=0)
#         print(f'{len(des_mat_all):d} key points have been extracted!')      
#         print('fitting kmeans to get codebook')
        self.kmeans = cluster.MiniBatchKMeans(n_clusters=self.vsize,
                                         init_size=10*self.vsize,
                                         batch_size=self.vsize,
                                         random_state=SEED)
        self.kmeans.fit(des_mat_all)
#         print('finish building codebook')
    
    def transform(self, im_paths):
#         print('building training feature matrix...')
#         bow_matrix = [ self.get_bow_feature_vec(im_path) for \
#                      im_path in tqdm(im_paths, desc='Extracting BoW feature') ]
        bow_matrix = Parallel(n_jobs=N_JOBS, backend='threading')\
            (delayed(self.get_bow_feature_vec)(im_path)
             for im_path in tqdm(im_paths, desc='Extracting BoW feature')
            )
        bow_matrix = np.stack(bow_matrix)
        return bow_matrix
    
    def fit_transform(self, im_paths):
        self.fit(im_paths)
        return self.transform(im_paths)
        
    def get_local_feature_by_path(self, im_path, ret_kp=False):
        img = cv2.imread(im_path)
        im_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        kp, des = self.local_feature_extractor.detectAndCompute(im_gray, None)
        
        if ret_kp:
            return kp, des
        
        return des
        
    def get_bow_feature_vec(self, im_path):
        des_mat =  self.get_local_feature_by_path(im_path)
        word_idx = self.kmeans.predict(des_mat)
        hist = np.bincount(word_idx.ravel(), minlength=self.vsize)
        return hist
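The core of `get_bow_feature_vec` is a fixed-length histogram over the predicted visual-word indices. A minimal numpy sketch (my own illustration, with hypothetical cluster assignments):

```python
import numpy as np

vsize = 5                          # hypothetical vocabulary size
word_idx = np.array([0, 2, 2, 4])  # KMeans cluster ids for 4 local descriptors

# count how often each visual word occurs; minlength pads words never seen
hist = np.bincount(word_idx, minlength=vsize)
print(hist)  # [1 0 2 0 1]
```

Each image thus becomes a `vsize`-dimensional vector regardless of how many keypoints were detected in it.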
In [29]:
trainvalY = np.argmax(trainval_labels, 1)
In [30]:
record_ls = []


bow_param_grid = {
#     'local_feature': ['ORB', 'SIFT', 'SURF'],
    'local_feature': ['ORB', 'SIFT'],
    'vsize': [10, 20, 50, 100, 200, 500],
    }

# classifier_names = ['linearSVM', 'rbfSVM', 'LR']
# there is a bug in sklearn: if linearSVM is used with GridSearchCV,
# the grid search will get stuck

classifier_names = ['rbfSVM', 'LR']

# bow_param_grid = {'local_feature': ['ORB'],
#                  'vsize': [100],
#                  }

bow_param_combs = list(ParameterGrid(bow_param_grid))

for comb in bow_param_combs:
    bow = BoW(**comb)
    trainvalXf = bow.fit_transform(trainval_paths)
    testXf = bow.transform(test_paths)
    
    for cls_name in classifier_names: 
        classifier = GridSearchCV(get_classifier(cls_name),
                                   {'C': np.logspace(-4, 4, 20)}, 
                                   scoring='accuracy',
                                   n_jobs=N_JOBS,
                                   verbose=True)
#         classifier = get_classifier(cls_name)
        classifier.fit(trainvalXf, trainvalY)
        
        score = classifier.best_score_

        record = OrderedDict()
        record['local_feature'] = comb['local_feature']
        record['vsize'] = comb['vsize']
        record['classifier'] = cls_name
        record['valid_acc'] = score
        record_ls.append(record)

        probs = classifier.predict_proba(testXf)
    #     print(probs.shape)
        sub.loc[:, 'healthy':] = probs
        sub.to_csv(os.path.join(submission_dir,
                                'BoW-{}-vsize{:d}-{}.csv'\
                                .format(comb['local_feature'],
                                        comb['vsize'],
                                        cls_name
                                       )
                               ),
                   index=False)
        
        print('({}, {}, {}): {:.4f}'.format(comb['local_feature'],
                                            comb['vsize'],
                                            cls_name,
                                            score))
#         del classifier
#         gc.collect()
Extrcting feature points: 100%|██████████| 1821/1821 [00:16<00:00, 110.79it/s]
Extracting BoW feature: 100%|██████████| 1821/1821 [00:38<00:00, 46.70it/s]
Extracting BoW feature: 100%|██████████| 1821/1821 [00:39<00:00, 46.57it/s]
Fitting 5 folds for each of 20 candidates, totalling 100 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:    3.5s
[Parallel(n_jobs=-1)]: Done 100 out of 100 | elapsed:   19.9s finished
(ORB, 10, rbfSVM): 0.3740
Fitting 5 folds for each of 20 candidates, totalling 100 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  56 tasks      | elapsed:    0.6s
[Parallel(n_jobs=-1)]: Done 100 out of 100 | elapsed:    0.8s finished
/home/lzhu68/miniconda3/envs/ml/lib/python3.6/site-packages/sklearn/linear_model/_logistic.py:940: ConvergenceWarning:

lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.

Increase the number of iterations (max_iter) or scale the data as shown in:
    https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
    https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression

Extrcting feature points:   1%|          | 16/1821 [00:00<00:11, 158.85it/s]
(ORB, 10, LR): 0.4322
Extrcting feature points: 100%|██████████| 1821/1821 [00:16<00:00, 109.78it/s]
Extracting BoW feature: 100%|██████████| 1821/1821 [00:25<00:00, 70.07it/s]
Extracting BoW feature: 100%|██████████| 1821/1821 [00:26<00:00, 69.96it/s]
Fitting 5 folds for each of 20 candidates, totalling 100 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:    4.1s
[Parallel(n_jobs=-1)]: Done 100 out of 100 | elapsed:   12.8s finished
(ORB, 20, rbfSVM): 0.4377
Fitting 5 folds for each of 20 candidates, totalling 100 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  52 tasks      | elapsed:    0.5s
[Parallel(n_jobs=-1)]: Done 100 out of 100 | elapsed:    0.8s finished
/home/lzhu68/miniconda3/envs/ml/lib/python3.6/site-packages/sklearn/linear_model/_logistic.py:940: ConvergenceWarning:

lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.

Increase the number of iterations (max_iter) or scale the data as shown in:
    https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
    https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression

Extrcting feature points:   0%|          | 0/1821 [00:00<?, ?it/s]
(ORB, 20, LR): 0.4607
Extrcting feature points: 100%|██████████| 1821/1821 [00:16<00:00, 110.97it/s]
Extracting BoW feature: 100%|██████████| 1821/1821 [01:31<00:00, 19.98it/s]
Extracting BoW feature: 100%|██████████| 1821/1821 [01:31<00:00, 19.86it/s]
Fitting 5 folds for each of 20 candidates, totalling 100 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:    7.5s
[Parallel(n_jobs=-1)]: Done 100 out of 100 | elapsed:   19.6s finished
(ORB, 50, rbfSVM): 0.4662
Fitting 5 folds for each of 20 candidates, totalling 100 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  52 tasks      | elapsed:    0.6s
[Parallel(n_jobs=-1)]: Done 100 out of 100 | elapsed:    0.9s finished
/home/lzhu68/miniconda3/envs/ml/lib/python3.6/site-packages/sklearn/linear_model/_logistic.py:940: ConvergenceWarning:

lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.

Increase the number of iterations (max_iter) or scale the data as shown in:
    https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
    https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression

Extrcting feature points:   0%|          | 0/1821 [00:00<?, ?it/s]
(ORB, 50, LR): 0.4619
Extrcting feature points: 100%|██████████| 1821/1821 [00:16<00:00, 110.10it/s]
Extracting BoW feature: 100%|██████████| 1821/1821 [01:26<00:00, 21.14it/s]
Extracting BoW feature: 100%|██████████| 1821/1821 [01:25<00:00, 21.18it/s]
Fitting 5 folds for each of 20 candidates, totalling 100 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:   12.6s
[Parallel(n_jobs=-1)]: Done 100 out of 100 | elapsed:   32.6s finished
(ORB, 100, rbfSVM): 0.4613
Fitting 5 folds for each of 20 candidates, totalling 100 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  52 tasks      | elapsed:    0.7s
[Parallel(n_jobs=-1)]: Done 100 out of 100 | elapsed:    1.2s finished
/home/lzhu68/miniconda3/envs/ml/lib/python3.6/site-packages/sklearn/linear_model/_logistic.py:940: ConvergenceWarning:

lbfgs failed to converge (status=1):
STOP: TOTAL NO. of ITERATIONS REACHED LIMIT.

Increase the number of iterations (max_iter) or scale the data as shown in:
    https://scikit-learn.org/stable/modules/preprocessing.html
Please also refer to the documentation for alternative solver options:
    https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression

Extrcting feature points:   0%|          | 0/1821 [00:00<?, ?it/s]
(ORB, 100, LR): 0.4723
Extrcting feature points: 100%|██████████| 1821/1821 [00:16<00:00, 110.85it/s]
Extracting BoW feature: 100%|██████████| 1821/1821 [01:24<00:00, 21.67it/s]
Extracting BoW feature: 100%|██████████| 1821/1821 [01:24<00:00, 21.56it/s]
Fitting 5 folds for each of 20 candidates, totalling 100 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:   24.6s
[Parallel(n_jobs=-1)]: Done 100 out of 100 | elapsed:  1.0min finished
(ORB, 200, rbfSVM): 0.4953
Fitting 5 folds for each of 20 candidates, totalling 100 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  52 tasks      | elapsed:    1.3s
[Parallel(n_jobs=-1)]: Done 100 out of 100 | elapsed:    2.2s finished
Extrcting feature points:   0%|          | 0/1821 [00:00<?, ?it/s]
(ORB, 200, LR): 0.4811
Extrcting feature points: 100%|██████████| 1821/1821 [00:16<00:00, 110.79it/s]
Extracting BoW feature: 100%|██████████| 1821/1821 [01:18<00:00, 23.29it/s]
Extracting BoW feature: 100%|██████████| 1821/1821 [01:18<00:00, 23.30it/s]
Fitting 5 folds for each of 20 candidates, totalling 100 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:  1.2min
[Parallel(n_jobs=-1)]: Done 100 out of 100 | elapsed:  3.1min finished
(ORB, 500, rbfSVM): 0.5085
Fitting 5 folds for each of 20 candidates, totalling 100 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:    1.9s
[Parallel(n_jobs=-1)]: Done 100 out of 100 | elapsed:    5.2s finished
Extracting feature points:   0%|          | 0/1821 [00:00<?, ?it/s]
(ORB, 500, LR): 0.4755
Extracting feature points: 100%|██████████| 1821/1821 [03:32<00:00,  8.56it/s]
Extracting BoW feature: 100%|██████████| 1821/1821 [06:09<00:00,  4.92it/s]
Extracting BoW feature: 100%|██████████| 1821/1821 [06:21<00:00,  4.77it/s]
Fitting 5 folds for each of 20 candidates, totalling 100 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:    3.8s
[Parallel(n_jobs=-1)]: Done 100 out of 100 | elapsed:   21.6s finished
(SIFT, 10, rbfSVM): 0.5228
Fitting 5 folds for each of 20 candidates, totalling 100 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  56 tasks      | elapsed:    0.6s
[Parallel(n_jobs=-1)]: Done 100 out of 100 | elapsed:    0.8s finished
Extracting feature points:   0%|          | 0/1821 [00:00<?, ?it/s]
(SIFT, 10, LR): 0.5085
Extracting feature points: 100%|██████████| 1821/1821 [03:32<00:00,  8.57it/s]
Extracting BoW feature: 100%|██████████| 1821/1821 [04:21<00:00,  6.98it/s]
Extracting BoW feature: 100%|██████████| 1821/1821 [04:16<00:00,  7.11it/s]
Fitting 5 folds for each of 20 candidates, totalling 100 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:    4.9s
[Parallel(n_jobs=-1)]: Done 100 out of 100 | elapsed:   18.4s finished
(SIFT, 20, rbfSVM): 0.5420
Fitting 5 folds for each of 20 candidates, totalling 100 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  56 tasks      | elapsed:    0.7s
[Parallel(n_jobs=-1)]: Done 100 out of 100 | elapsed:    0.8s finished
Extracting feature points:   0%|          | 0/1821 [00:00<?, ?it/s]
(SIFT, 20, LR): 0.5420
Extracting feature points: 100%|██████████| 1821/1821 [03:29<00:00,  8.67it/s]
Extracting BoW feature: 100%|██████████| 1821/1821 [07:48<00:00,  3.89it/s]
Extracting BoW feature: 100%|██████████| 1821/1821 [07:43<00:00,  3.93it/s]
Fitting 5 folds for each of 20 candidates, totalling 100 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:    8.3s
[Parallel(n_jobs=-1)]: Done 100 out of 100 | elapsed:   20.6s finished
(SIFT, 50, rbfSVM): 0.5821
Fitting 5 folds for each of 20 candidates, totalling 100 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  52 tasks      | elapsed:    0.6s
[Parallel(n_jobs=-1)]: Done 100 out of 100 | elapsed:    0.9s finished
Extracting feature points:   0%|          | 0/1821 [00:00<?, ?it/s]
(SIFT, 50, LR): 0.5832
Extracting feature points: 100%|██████████| 1821/1821 [03:33<00:00,  8.53it/s]
Extracting BoW feature: 100%|██████████| 1821/1821 [06:51<00:00,  4.42it/s]
Extracting BoW feature: 100%|██████████| 1821/1821 [06:46<00:00,  4.48it/s]
Fitting 5 folds for each of 20 candidates, totalling 100 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:   13.4s
[Parallel(n_jobs=-1)]: Done 100 out of 100 | elapsed:   29.0s finished
(SIFT, 100, rbfSVM): 0.5936
Fitting 5 folds for each of 20 candidates, totalling 100 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  52 tasks      | elapsed:    0.7s
[Parallel(n_jobs=-1)]: Done 100 out of 100 | elapsed:    1.2s finished
Extracting feature points:   0%|          | 0/1821 [00:00<?, ?it/s]
(SIFT, 100, LR): 0.6162
Extracting feature points: 100%|██████████| 1821/1821 [03:35<00:00,  8.44it/s]
Extracting BoW feature: 100%|██████████| 1821/1821 [06:42<00:00,  4.53it/s]
Extracting BoW feature: 100%|██████████| 1821/1821 [06:37<00:00,  4.58it/s]
Fitting 5 folds for each of 20 candidates, totalling 100 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:   25.4s
[Parallel(n_jobs=-1)]: Done 100 out of 100 | elapsed:   52.0s finished
(SIFT, 200, rbfSVM): 0.6123
Fitting 5 folds for each of 20 candidates, totalling 100 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  52 tasks      | elapsed:    1.5s
[Parallel(n_jobs=-1)]: Done 100 out of 100 | elapsed:    2.3s finished
Extracting feature points:   0%|          | 0/1821 [00:00<?, ?it/s]
(SIFT, 200, LR): 0.6365
Extracting feature points: 100%|██████████| 1821/1821 [03:29<00:00,  8.69it/s]
Extracting BoW feature: 100%|██████████| 1821/1821 [05:51<00:00,  5.17it/s]
Extracting BoW feature: 100%|██████████| 1821/1821 [06:25<00:00,  4.72it/s]
Fitting 5 folds for each of 20 candidates, totalling 100 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:  1.2min
[Parallel(n_jobs=-1)]: Done 100 out of 100 | elapsed:  2.6min finished
(SIFT, 500, rbfSVM): 0.6502
Fitting 5 folds for each of 20 candidates, totalling 100 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:    2.1s
(SIFT, 500, LR): 0.6573
[Parallel(n_jobs=-1)]: Done 100 out of 100 | elapsed:    5.3s finished
In [31]:
report_df = pd.DataFrame(record_ls)

with pd.option_context('display.max_rows', None, 'display.max_columns', None): 
    display(report_df)
    

report_df.to_csv(os.path.join('../output', f'BoW_feature_report.csv'), index=False)
    local_feature  vsize  classifier  valid_acc
0   ORB               10  rbfSVM       0.373953
1   ORB               10  LR           0.432175
2   ORB               20  rbfSVM       0.437664
3   ORB               20  LR           0.460730
4   ORB               50  rbfSVM       0.466211
5   ORB               50  LR           0.461855
6   ORB              100  rbfSVM       0.461263
7   ORB              100  LR           0.472264
8   ORB              200  rbfSVM       0.495333
9   ORB              200  LR           0.481058
10  ORB              500  rbfSVM       0.508510
11  ORB              500  LR           0.475541
12  SIFT              10  rbfSVM       0.522794
13  SIFT              10  LR           0.508525
14  SIFT              20  rbfSVM       0.542002
15  SIFT              20  LR           0.541995
16  SIFT              50  rbfSVM       0.582095
17  SIFT              50  LR           0.583206
18  SIFT             100  rbfSVM       0.593632
19  SIFT             100  LR           0.616158
20  SIFT             200  rbfSVM       0.612281
21  SIFT             200  LR           0.636462
22  SIFT             500  rbfSVM       0.650169
23  SIFT             500  LR           0.657330

The best score is 0.657330, which is not satisfactory. This is likely because the representational capacity of BoW features is limited: the histogram discards spatial layout and quantizes local appearance into a fixed vocabulary.
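For reference, the BoW quantization step used above can be sketched as follows. This is a minimal, self-contained version assuming local descriptors have already been extracted; the random arrays stand in for SIFT descriptors, and the `bow_histogram` helper name is illustrative, not from the notebook code.

```python
import numpy as np
from sklearn.cluster import KMeans

def bow_histogram(descriptors, kmeans):
    # Assign each local descriptor to its nearest visual word,
    # then build an L1-normalized count histogram over the vocabulary.
    words = kmeans.predict(descriptors)
    hist = np.bincount(words, minlength=kmeans.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

# Toy stand-in for SIFT descriptors (128-D vectors); in the notebook these
# come from cv2.xfeatures2d.SIFT_create().detectAndCompute(...).
rng = np.random.RandomState(0)
vocab = KMeans(n_clusters=10, random_state=0, n_init=10).fit(rng.rand(500, 128))
hist = bow_histogram(rng.rand(40, 128), vocab)  # one image -> 10-D BoW vector
```

Larger vocabularies (the `vsize` column above) trade finer appearance quantization against sparser, noisier histograms, which is why accuracy grows with vocabulary size here but with diminishing returns.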

Pretrained CNN feature + ML classifiers

  • Pretrained CNN: MobileNetV2, InceptionResNetV2, VGG16
  • Dimension reduction: NMF, rbfKPCA, none
  • Machine learning classifiers: linearSVM, rbfSVM, LR
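The combination above is essentially a scikit-learn `Pipeline` wrapped in `GridSearchCV` over the regularization strength `C`, as implemented in the cell below. A minimal sketch on synthetic data (the real inputs are global-max-pooled CNN activations, not these random features):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import NMF
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

# Synthetic stand-in for pooled CNN features; NMF requires non-negative
# input, which ReLU-based CNN activations satisfy anyway.
X, y = make_classification(n_samples=200, n_features=64, random_state=0)
X = np.abs(X)

pipe = Pipeline([('dimred', NMF(n_components=16, init='nndsvda', max_iter=500)),
                 ('cls', LogisticRegression(max_iter=1000))])
search = GridSearchCV(pipe, {'cls__C': np.logspace(-3, 3, 7)},
                      scoring='accuracy', cv=3, n_jobs=1)
search.fit(X, y)
```

`search.best_score_` is then the cross-validated accuracy reported as `valid_acc` for each combination.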
In [13]:
# a strange bug: when I use N_JOBS=-1, the fitting gets stuck

class HybridClassfier(object):
    """Pretrained-CNN feature extractor + optional dimension reduction + ML classifier."""
    def __init__(self, cnn='VGG16', classifier='linearSVM', dim_reductor='PCA_128', cache_dir='feature_cache'):
        
#         self.cache_dir = cachedir
#         os.makedir(self.cache_dir, exist_ok=True)
        backbone = get_backbone(cnn)
        self.feature =  tf.keras.Sequential([
            backbone,
            L.GlobalMaxPooling2D()
        ])
    
        if dim_reductor != 'none':
            dim_red = get_dim_reductor(dim_reductor)
            classifier = get_classifier(classifier)
            pipe = Pipeline(steps=[('dimred', dim_red),
#                                    ('normalizer', Normalizer()),
                                   ('cls', classifier)])
            param_grid = {'cls__C': np.logspace(-3, 3, 13)}
            self.classifier = GridSearchCV(pipe,
                                           param_grid,
                                           scoring='accuracy',
                                           n_jobs=N_JOBS,
                                           verbose=True)
        
        else:
            classifier = get_classifier(classifier)
            pipe = Pipeline(steps=[('scaler', StandardScaler()),
                                   ('cls', classifier)])
            param_grid =  {'cls__C': np.logspace(-3,3,13)}
            self.classifier = GridSearchCV(pipe,
                                           param_grid, 
                                           scoring='accuracy',
                                           n_jobs=N_JOBS,
                                           verbose=True)
        
    def fit(self, train_data):
        Xf = []
        Y = []
        print('extracting feature...')
        for image_batch, label_batch in train_data:
#             print(self.feature(image_batch).shape)
            Xf.append(self.feature(image_batch)) 
            Y.append(label_batch.numpy())
#         print(len(Xf))
        Xf = np.concatenate(Xf, 0)
        Y = np.argmax(np.concatenate(Y, 0), 1)
#         print(Xf.shape)
#         print(Xf[:5])
#         print(Y[:5])
        with parallel_backend('loky'):
            self.classifier.fit(Xf, Y)
        return self.classifier.best_score_   
            
    def predict(self, test_data):
        Xf = []
        for image_batch in test_data:
            Xf.append(self.feature(image_batch)) 
        Xf = np.concatenate(Xf, 0)
        return self.classifier.predict_proba(Xf)
    
    def close(self):
        # release memory
        del self.feature
        K.clear_session()
        gc.collect()
    
    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.close()
#         print('closed successfully')
    
#     def __del__(self):
#         self.close()
In [14]:
record_ls = []
comb_grid = {'cnn': ['VGG16', 'InceptionResNetV2', 'MobileNetV2'],
             'classifier': ['LR', 'linearSVM', 'rbfSVM'],
#              'classifier': ['LR'],
             'dim_reductor': ['none', 'NMF_32', 'NMF_64', 'NMF_128',
                              'KPCA_32', 'KPCA_64', 'KPCA_128']}
param_combs = list(ParameterGrid(comb_grid))

for comb in param_combs:
    with HybridClassfier(**comb) as classifier:
        print('current combination: ', comb)
        record = OrderedDict()
    #     classifier = HybridClassfier(**comb)
        score = classifier.fit(trainval_dataset)

        record['cnn'] = comb['cnn']
        record['dim_reductor'] = comb['dim_reductor']
        record['classifier'] = comb['classifier']
        record['valid_acc'] = score
        record_ls.append(record)

        print('current score: ', score)

        probs = classifier.predict(test_dataset)
        sub.loc[:, 'healthy':] = probs
        sub.to_csv(os.path.join(submission_dir,
                                '{}-{}-{}.csv'\
                                .format(comb['cnn'], comb['dim_reductor'], comb['classifier'])),
                   index=False)
current combination:  {'classifier': 'LR', 'cnn': 'VGG16', 'dim_reductor': 'none'}
extracting feature...
Fitting 5 folds for each of 13 candidates, totalling 65 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:    2.3s
[Parallel(n_jobs=-1)]: Done  65 out of  65 | elapsed:    3.9s finished
current score:  0.7611049224747856
current combination:  {'classifier': 'LR', 'cnn': 'VGG16', 'dim_reductor': 'NMF_32'}
extracting feature...
Fitting 5 folds for each of 13 candidates, totalling 65 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:    7.2s
[Parallel(n_jobs=-1)]: Done  65 out of  65 | elapsed:   12.3s finished
current score:  0.7122354357970797
current combination:  {'classifier': 'LR', 'cnn': 'VGG16', 'dim_reductor': 'NMF_64'}
extracting feature...
Fitting 5 folds for each of 13 candidates, totalling 65 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:   15.5s
[Parallel(n_jobs=-1)]: Done  65 out of  65 | elapsed:   27.1s finished
current score:  0.7226780069245823
current combination:  {'classifier': 'LR', 'cnn': 'VGG16', 'dim_reductor': 'NMF_128'}
extracting feature...
Fitting 5 folds for each of 13 candidates, totalling 65 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:   48.7s
[Parallel(n_jobs=-1)]: Done  65 out of  65 | elapsed:  1.4min finished
current score:  0.7248848411862111
current combination:  {'classifier': 'LR', 'cnn': 'VGG16', 'dim_reductor': 'KPCA_32'}
extracting feature...
Fitting 5 folds for each of 13 candidates, totalling 65 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:    2.5s
[Parallel(n_jobs=-1)]: Done  65 out of  65 | elapsed:    4.2s finished
current score:  0.7166325455366552
current combination:  {'classifier': 'LR', 'cnn': 'VGG16', 'dim_reductor': 'KPCA_64'}
extracting feature...
Fitting 5 folds for each of 13 candidates, totalling 65 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:    2.4s
[Parallel(n_jobs=-1)]: Done  65 out of  65 | elapsed:    4.2s finished
current score:  0.7314616889959356
current combination:  {'classifier': 'LR', 'cnn': 'VGG16', 'dim_reductor': 'KPCA_128'}
extracting feature...
Fitting 5 folds for each of 13 candidates, totalling 65 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:    2.5s
[Parallel(n_jobs=-1)]: Done  65 out of  65 | elapsed:    4.4s finished
current score:  0.7484886346530182
current combination:  {'classifier': 'LR', 'cnn': 'InceptionResNetV2', 'dim_reductor': 'none'}
extracting feature...
Fitting 5 folds for each of 13 candidates, totalling 65 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:    6.5s
[Parallel(n_jobs=-1)]: Done  65 out of  65 | elapsed:   11.0s finished
current score:  0.7572587686286316
current combination:  {'classifier': 'LR', 'cnn': 'InceptionResNetV2', 'dim_reductor': 'NMF_32'}
extracting feature...
Fitting 5 folds for each of 13 candidates, totalling 65 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:   19.4s
[Parallel(n_jobs=-1)]: Done  65 out of  65 | elapsed:   33.4s finished
current score:  0.655131717597471
current combination:  {'classifier': 'LR', 'cnn': 'InceptionResNetV2', 'dim_reductor': 'NMF_64'}
extracting feature...
Fitting 5 folds for each of 13 candidates, totalling 65 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:   37.5s
[Parallel(n_jobs=-1)]: Done  65 out of  65 | elapsed:  1.1min finished
current score:  0.6930152039741081
current combination:  {'classifier': 'LR', 'cnn': 'InceptionResNetV2', 'dim_reductor': 'NMF_128'}
extracting feature...
Fitting 5 folds for each of 13 candidates, totalling 65 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:  1.8min
[Parallel(n_jobs=-1)]: Done  65 out of  65 | elapsed:  3.0min finished
current score:  0.7193858196597923
current combination:  {'classifier': 'LR', 'cnn': 'InceptionResNetV2', 'dim_reductor': 'KPCA_32'}
extracting feature...
Fitting 5 folds for each of 13 candidates, totalling 65 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:    2.9s
[Parallel(n_jobs=-1)]: Done  65 out of  65 | elapsed:    5.1s finished
/home/lzhu68/miniconda3/envs/ml/lib/python3.6/site-packages/sklearn/utils/extmath.py:530: RuntimeWarning:

invalid value encountered in multiply

current score:  0.6826042450699985
current combination:  {'classifier': 'LR', 'cnn': 'InceptionResNetV2', 'dim_reductor': 'KPCA_64'}
extracting feature...
Fitting 5 folds for each of 13 candidates, totalling 65 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:    3.0s
[Parallel(n_jobs=-1)]: Done  65 out of  65 | elapsed:    5.3s finished
current score:  0.7100511816950175
current combination:  {'classifier': 'LR', 'cnn': 'InceptionResNetV2', 'dim_reductor': 'KPCA_128'}
extracting feature...
Fitting 5 folds for each of 13 candidates, totalling 65 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:    3.1s
[Parallel(n_jobs=-1)]: Done  65 out of  65 | elapsed:    5.5s finished
current score:  0.7166310401926841
/home/lzhu68/miniconda3/envs/ml/lib/python3.6/site-packages/keras_applications/mobilenet_v2.py:294: UserWarning:

`input_shape` is undefined or non-square, or `rows` is not in [96, 128, 160, 192, 224]. Weights for input shape (224, 224) will be loaded as the default.

current combination:  {'classifier': 'LR', 'cnn': 'MobileNetV2', 'dim_reductor': 'none'}
extracting feature...
Fitting 5 folds for each of 13 candidates, totalling 65 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:    4.2s
[Parallel(n_jobs=-1)]: Done  65 out of  65 | elapsed:    7.6s finished
current score:  0.8627156405238597
current combination:  {'classifier': 'LR', 'cnn': 'MobileNetV2', 'dim_reductor': 'NMF_32'}
extracting feature...
Fitting 5 folds for each of 13 candidates, totalling 65 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:   14.4s
[Parallel(n_jobs=-1)]: Done  65 out of  65 | elapsed:   24.7s finished
current score:  0.8434984193888304
current combination:  {'classifier': 'LR', 'cnn': 'MobileNetV2', 'dim_reductor': 'NMF_64'}
extracting feature...
Fitting 5 folds for each of 13 candidates, totalling 65 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:   28.8s
[Parallel(n_jobs=-1)]: Done  65 out of  65 | elapsed:   50.1s finished
current score:  0.851729640222791
current combination:  {'classifier': 'LR', 'cnn': 'MobileNetV2', 'dim_reductor': 'NMF_128'}
extracting feature...
Fitting 5 folds for each of 13 candidates, totalling 65 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:  1.5min
[Parallel(n_jobs=-1)]: Done  65 out of  65 | elapsed:  2.5min finished
current score:  0.8500933313262081
current combination:  {'classifier': 'LR', 'cnn': 'MobileNetV2', 'dim_reductor': 'KPCA_32'}
extracting feature...
Fitting 5 folds for each of 13 candidates, totalling 65 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:    2.8s
[Parallel(n_jobs=-1)]: Done  65 out of  65 | elapsed:    4.9s finished
current score:  0.8226223091976517
current combination:  {'classifier': 'LR', 'cnn': 'MobileNetV2', 'dim_reductor': 'KPCA_64'}
extracting feature...
Fitting 5 folds for each of 13 candidates, totalling 65 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:    2.9s
[Parallel(n_jobs=-1)]: Done  65 out of  65 | elapsed:    5.1s finished
current score:  0.8215249134427216
current combination:  {'classifier': 'LR', 'cnn': 'MobileNetV2', 'dim_reductor': 'KPCA_128'}
extracting feature...
Fitting 5 folds for each of 13 candidates, totalling 65 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:    3.0s
[Parallel(n_jobs=-1)]: Done  65 out of  65 | elapsed:    5.3s finished
current score:  0.8292111997591449
current combination:  {'classifier': 'linearSVM', 'cnn': 'VGG16', 'dim_reductor': 'none'}
extracting feature...
Fitting 5 folds for each of 13 candidates, totalling 65 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:   32.7s
[Parallel(n_jobs=-1)]: Done  65 out of  65 | elapsed:   54.1s finished
current score:  0.7177419840433539
current combination:  {'classifier': 'linearSVM', 'cnn': 'VGG16', 'dim_reductor': 'NMF_32'}
extracting feature...
Fitting 5 folds for each of 13 candidates, totalling 65 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:    9.5s
[Parallel(n_jobs=-1)]: Done  65 out of  65 | elapsed:   25.5s finished
current score:  0.6474379045611922
current combination:  {'classifier': 'linearSVM', 'cnn': 'VGG16', 'dim_reductor': 'NMF_64'}
extracting feature...
Fitting 5 folds for each of 13 candidates, totalling 65 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:   20.8s
[Parallel(n_jobs=-1)]: Done  65 out of  65 | elapsed:  1.1min finished
current score:  0.6644558181544482
current combination:  {'classifier': 'linearSVM', 'cnn': 'VGG16', 'dim_reductor': 'NMF_128'}
extracting feature...
Fitting 5 folds for each of 13 candidates, totalling 65 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:   58.1s
[Parallel(n_jobs=-1)]: Done  65 out of  65 | elapsed:  1.9min finished
current score:  0.6721436098148427
current combination:  {'classifier': 'linearSVM', 'cnn': 'VGG16', 'dim_reductor': 'KPCA_32'}
extracting feature...
Fitting 5 folds for each of 13 candidates, totalling 65 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:    4.4s
[Parallel(n_jobs=-1)]: Done  65 out of  65 | elapsed:    8.9s finished
current score:  0.6501670931807919
current combination:  {'classifier': 'linearSVM', 'cnn': 'VGG16', 'dim_reductor': 'KPCA_64'}
extracting feature...
Fitting 5 folds for each of 13 candidates, totalling 65 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:    6.7s
[Parallel(n_jobs=-1)]: Done  65 out of  65 | elapsed:   13.3s finished
current score:  0.6770871594159266
current combination:  {'classifier': 'linearSVM', 'cnn': 'VGG16', 'dim_reductor': 'KPCA_128'}
extracting feature...
Fitting 5 folds for each of 13 candidates, totalling 65 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:   13.0s
[Parallel(n_jobs=-1)]: Done  65 out of  65 | elapsed:   21.2s finished
current score:  0.7105825681168148
current combination:  {'classifier': 'linearSVM', 'cnn': 'InceptionResNetV2', 'dim_reductor': 'none'}
extracting feature...
Fitting 5 folds for each of 13 candidates, totalling 65 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:  2.2min
[Parallel(n_jobs=-1)]: Done  65 out of  65 | elapsed:  3.7min finished
current score:  0.7380581062772844
current combination:  {'classifier': 'linearSVM', 'cnn': 'InceptionResNetV2', 'dim_reductor': 'NMF_32'}
extracting feature...
Fitting 5 folds for each of 13 candidates, totalling 65 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:   21.8s
[Parallel(n_jobs=-1)]: Done  65 out of  65 | elapsed:  3.0min finished
current score:  0.6024055396658137
current combination:  {'classifier': 'linearSVM', 'cnn': 'InceptionResNetV2', 'dim_reductor': 'NMF_64'}
extracting feature...
Fitting 5 folds for each of 13 candidates, totalling 65 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:   40.5s
[Parallel(n_jobs=-1)]: Done  65 out of  65 | elapsed:  5.5min finished
current score:  0.6403040794821616
current combination:  {'classifier': 'linearSVM', 'cnn': 'InceptionResNetV2', 'dim_reductor': 'NMF_128'}
extracting feature...
Fitting 5 folds for each of 13 candidates, totalling 65 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:  1.9min
[Parallel(n_jobs=-1)]: Done  65 out of  65 | elapsed:  8.5min finished
current score:  0.6710763209393347
current combination:  {'classifier': 'linearSVM', 'cnn': 'InceptionResNetV2', 'dim_reductor': 'KPCA_32'}
extracting feature...
Fitting 5 folds for each of 13 candidates, totalling 65 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:    5.6s
[Parallel(n_jobs=-1)]: Done  65 out of  65 | elapsed:   12.2s finished
current score:  0.6040478699382809
current combination:  {'classifier': 'linearSVM', 'cnn': 'InceptionResNetV2', 'dim_reductor': 'KPCA_64'}
extracting feature...
Fitting 5 folds for each of 13 candidates, totalling 65 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:    7.7s
[Parallel(n_jobs=-1)]: Done  65 out of  65 | elapsed:   15.6s finished
current score:  0.646877916603944
current combination:  {'classifier': 'linearSVM', 'cnn': 'InceptionResNetV2', 'dim_reductor': 'KPCA_128'}
extracting feature...
Fitting 5 folds for each of 13 candidates, totalling 65 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:   14.0s
[Parallel(n_jobs=-1)]: Done  65 out of  65 | elapsed:   24.3s finished
current score:  0.695194942044257
current combination:  {'classifier': 'linearSVM', 'cnn': 'MobileNetV2', 'dim_reductor': 'none'}
extracting feature...
Fitting 5 folds for each of 13 candidates, totalling 65 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:  1.4min
[Parallel(n_jobs=-1)]: Done  65 out of  65 | elapsed:  2.3min finished
current score:  0.8555742887249738
/home/lzhu68/miniconda3/envs/ml/lib/python3.6/site-packages/keras_applications/mobilenet_v2.py:294: UserWarning:

`input_shape` is undefined or non-square, or `rows` is not in [96, 128, 160, 192, 224]. Weights for input shape (224, 224) will be loaded as the default.

current combination:  {'classifier': 'linearSVM', 'cnn': 'MobileNetV2', 'dim_reductor': 'NMF_32'}
extracting feature...
Fitting 5 folds for each of 13 candidates, totalling 65 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:   16.3s
[Parallel(n_jobs=-1)]: Done  65 out of  65 | elapsed:  1.1min finished
current score:  0.7863811530934818
/home/lzhu68/miniconda3/envs/ml/lib/python3.6/site-packages/keras_applications/mobilenet_v2.py:294: UserWarning:

`input_shape` is undefined or non-square, or `rows` is not in [96, 128, 160, 192, 224]. Weights for input shape (224, 224) will be loaded as the default.

current combination:  {'classifier': 'linearSVM', 'cnn': 'MobileNetV2', 'dim_reductor': 'NMF_64'}
extracting feature...
Fitting 5 folds for each of 13 candidates, totalling 65 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:   32.2s
[Parallel(n_jobs=-1)]: Done  65 out of  65 | elapsed:  2.0min finished
current score:  0.8083456269757641
/home/lzhu68/miniconda3/envs/ml/lib/python3.6/site-packages/keras_applications/mobilenet_v2.py:294: UserWarning:

`input_shape` is undefined or non-square, or `rows` is not in [96, 128, 160, 192, 224]. Weights for input shape (224, 224) will be loaded as the default.

current combination:  {'classifier': 'linearSVM', 'cnn': 'MobileNetV2', 'dim_reductor': 'NMF_128'}
extracting feature...
Fitting 5 folds for each of 13 candidates, totalling 65 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:  1.6min
[Parallel(n_jobs=-1)]: Done  65 out of  65 | elapsed:  2.9min finished
current score:  0.8237227156405238
/home/lzhu68/miniconda3/envs/ml/lib/python3.6/site-packages/keras_applications/mobilenet_v2.py:294: UserWarning:

`input_shape` is undefined or non-square, or `rows` is not in [96, 128, 160, 192, 224]. Weights for input shape (224, 224) will be loaded as the default.

current combination:  {'classifier': 'linearSVM', 'cnn': 'MobileNetV2', 'dim_reductor': 'KPCA_32'}
extracting feature...
Fitting 5 folds for each of 13 candidates, totalling 65 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:    5.2s
[Parallel(n_jobs=-1)]: Done  65 out of  65 | elapsed:    7.8s finished
current score:  0.7924236037934668
/home/lzhu68/miniconda3/envs/ml/lib/python3.6/site-packages/keras_applications/mobilenet_v2.py:294: UserWarning:

`input_shape` is undefined or non-square, or `rows` is not in [96, 128, 160, 192, 224]. Weights for input shape (224, 224) will be loaded as the default.

current combination:  {'classifier': 'linearSVM', 'cnn': 'MobileNetV2', 'dim_reductor': 'KPCA_64'}
extracting feature...
Fitting 5 folds for each of 13 candidates, totalling 65 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:    7.4s
[Parallel(n_jobs=-1)]: Done  65 out of  65 | elapsed:   10.8s finished
/home/lzhu68/miniconda3/envs/ml/lib/python3.6/site-packages/sklearn/utils/extmath.py:530: RuntimeWarning:

invalid value encountered in multiply

current score:  0.8237272316724372
/home/lzhu68/miniconda3/envs/ml/lib/python3.6/site-packages/keras_applications/mobilenet_v2.py:294: UserWarning:

`input_shape` is undefined or non-square, or `rows` is not in [96, 128, 160, 192, 224]. Weights for input shape (224, 224) will be loaded as the default.

current combination:  {'classifier': 'linearSVM', 'cnn': 'MobileNetV2', 'dim_reductor': 'KPCA_128'}
extracting feature...
Fitting 5 folds for each of 13 candidates, totalling 65 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:   14.0s
[Parallel(n_jobs=-1)]: Done  65 out of  65 | elapsed:   19.2s finished
/home/lzhu68/miniconda3/envs/ml/lib/python3.6/site-packages/sklearn/utils/extmath.py:530: RuntimeWarning:

invalid value encountered in multiply

current score:  0.828115309348186
current combination:  {'classifier': 'rbfSVM', 'cnn': 'VGG16', 'dim_reductor': 'none'}
extracting feature...
Fitting 5 folds for each of 13 candidates, totalling 65 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:  1.2min
[Parallel(n_jobs=-1)]: Done  65 out of  65 | elapsed:  2.0min finished
current score:  0.7720984494957097
current combination:  {'classifier': 'rbfSVM', 'cnn': 'VGG16', 'dim_reductor': 'NMF_32'}
extracting feature...
Fitting 5 folds for each of 13 candidates, totalling 65 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:   11.6s
[Parallel(n_jobs=-1)]: Done  65 out of  65 | elapsed:   19.5s finished
current score:  0.6985097094686136
current combination:  {'classifier': 'rbfSVM', 'cnn': 'VGG16', 'dim_reductor': 'NMF_64'}
extracting feature...
Fitting 5 folds for each of 13 candidates, totalling 65 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:   23.3s
[Parallel(n_jobs=-1)]: Done  65 out of  65 | elapsed:   40.0s finished
current score:  0.7056525666114707
current combination:  {'classifier': 'rbfSVM', 'cnn': 'VGG16', 'dim_reductor': 'NMF_128'}
extracting feature...
Fitting 5 folds for each of 13 candidates, totalling 65 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:  1.0min
[Parallel(n_jobs=-1)]: Done  65 out of  65 | elapsed:  1.8min finished
current score:  0.7325380099352702
current combination:  {'classifier': 'rbfSVM', 'cnn': 'VGG16', 'dim_reductor': 'KPCA_32'}
extracting feature...
Fitting 5 folds for each of 13 candidates, totalling 65 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:    6.4s
[Parallel(n_jobs=-1)]: Done  65 out of  65 | elapsed:   10.5s finished
current score:  0.73475989763661
current combination:  {'classifier': 'rbfSVM', 'cnn': 'VGG16', 'dim_reductor': 'KPCA_64'}
extracting feature...
Fitting 5 folds for each of 13 candidates, totalling 65 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:    9.5s
[Parallel(n_jobs=-1)]: Done  65 out of  65 | elapsed:   16.0s finished
/home/lzhu68/miniconda3/envs/ml/lib/python3.6/site-packages/sklearn/utils/extmath.py:530: RuntimeWarning:

invalid value encountered in multiply

current score:  0.769349691404486
current combination:  {'classifier': 'rbfSVM', 'cnn': 'VGG16', 'dim_reductor': 'KPCA_128'}
extracting feature...
Fitting 5 folds for each of 13 candidates, totalling 65 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:   15.7s
[Parallel(n_jobs=-1)]: Done  65 out of  65 | elapsed:   27.0s finished
/home/lzhu68/miniconda3/envs/ml/lib/python3.6/site-packages/sklearn/utils/extmath.py:530: RuntimeWarning:

invalid value encountered in multiply

current score:  0.7764805057955743
current combination:  {'classifier': 'rbfSVM', 'cnn': 'InceptionResNetV2', 'dim_reductor': 'none'}
extracting feature...
Fitting 5 folds for each of 13 candidates, totalling 65 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:  3.9min
[Parallel(n_jobs=-1)]: Done  65 out of  65 | elapsed:  6.4min finished
current score:  0.7424552160168598
current combination:  {'classifier': 'rbfSVM', 'cnn': 'InceptionResNetV2', 'dim_reductor': 'NMF_32'}
extracting feature...
Fitting 5 folds for each of 13 candidates, totalling 65 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:   24.2s
[Parallel(n_jobs=-1)]: Done  65 out of  65 | elapsed:   40.4s finished
current score:  0.6414120126448892
current combination:  {'classifier': 'rbfSVM', 'cnn': 'InceptionResNetV2', 'dim_reductor': 'NMF_64'}
extracting feature...
Fitting 5 folds for each of 13 candidates, totalling 65 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:   45.3s
[Parallel(n_jobs=-1)]: Done  65 out of  65 | elapsed:  1.3min finished
current score:  0.6727096191479752
current combination:  {'classifier': 'rbfSVM', 'cnn': 'InceptionResNetV2', 'dim_reductor': 'NMF_128'}
extracting feature...
Fitting 5 folds for each of 13 candidates, totalling 65 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:  2.0min
[Parallel(n_jobs=-1)]: Done  65 out of  65 | elapsed:  3.4min finished
current score:  0.7018139394851723
current combination:  {'classifier': 'rbfSVM', 'cnn': 'InceptionResNetV2', 'dim_reductor': 'KPCA_32'}
extracting feature...
Fitting 5 folds for each of 13 candidates, totalling 65 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:    7.4s
[Parallel(n_jobs=-1)]: Done  65 out of  65 | elapsed:   11.5s finished
current score:  0.6567815745897938
current combination:  {'classifier': 'rbfSVM', 'cnn': 'InceptionResNetV2', 'dim_reductor': 'KPCA_64'}
extracting feature...
Fitting 5 folds for each of 13 candidates, totalling 65 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:   10.4s
[Parallel(n_jobs=-1)]: Done  65 out of  65 | elapsed:   15.8s finished
/home/lzhu68/miniconda3/envs/ml/lib/python3.6/site-packages/sklearn/utils/extmath.py:530: RuntimeWarning:

invalid value encountered in multiply

current score:  0.688085202468764
current combination:  {'classifier': 'rbfSVM', 'cnn': 'InceptionResNetV2', 'dim_reductor': 'KPCA_128'}
extracting feature...
Fitting 5 folds for each of 13 candidates, totalling 65 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:   16.6s
[Parallel(n_jobs=-1)]: Done  65 out of  65 | elapsed:   26.1s finished
/home/lzhu68/miniconda3/envs/ml/lib/python3.6/site-packages/sklearn/utils/extmath.py:530: RuntimeWarning:

invalid value encountered in multiply

current score:  0.7061899744091524
/home/lzhu68/miniconda3/envs/ml/lib/python3.6/site-packages/keras_applications/mobilenet_v2.py:294: UserWarning:

`input_shape` is undefined or non-square, or `rows` is not in [96, 128, 160, 192, 224]. Weights for input shape (224, 224) will be loaded as the default.

current combination:  {'classifier': 'rbfSVM', 'cnn': 'MobileNetV2', 'dim_reductor': 'none'}
extracting feature...
Fitting 5 folds for each of 13 candidates, totalling 65 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:  3.2min
[Parallel(n_jobs=-1)]: Done  65 out of  65 | elapsed:  5.3min finished
current score:  0.8594159265392143
/home/lzhu68/miniconda3/envs/ml/lib/python3.6/site-packages/keras_applications/mobilenet_v2.py:294: UserWarning:

`input_shape` is undefined or non-square, or `rows` is not in [96, 128, 160, 192, 224]. Weights for input shape (224, 224) will be loaded as the default.

current combination:  {'classifier': 'rbfSVM', 'cnn': 'MobileNetV2', 'dim_reductor': 'NMF_32'}
extracting feature...
Fitting 5 folds for each of 13 candidates, totalling 65 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:   18.7s
[Parallel(n_jobs=-1)]: Done  65 out of  65 | elapsed:   30.4s finished
current score:  0.8369155502032214
/home/lzhu68/miniconda3/envs/ml/lib/python3.6/site-packages/keras_applications/mobilenet_v2.py:294: UserWarning:

`input_shape` is undefined or non-square, or `rows` is not in [96, 128, 160, 192, 224]. Weights for input shape (224, 224) will be loaded as the default.

current combination:  {'classifier': 'rbfSVM', 'cnn': 'MobileNetV2', 'dim_reductor': 'NMF_64'}
extracting feature...
Fitting 5 folds for each of 13 candidates, totalling 65 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:   35.3s
[Parallel(n_jobs=-1)]: Done  65 out of  65 | elapsed:   59.6s finished
current score:  0.8456917055547193
/home/lzhu68/miniconda3/envs/ml/lib/python3.6/site-packages/keras_applications/mobilenet_v2.py:294: UserWarning:

`input_shape` is undefined or non-square, or `rows` is not in [96, 128, 160, 192, 224]. Weights for input shape (224, 224) will be loaded as the default.

current combination:  {'classifier': 'rbfSVM', 'cnn': 'MobileNetV2', 'dim_reductor': 'NMF_128'}
extracting feature...
Fitting 5 folds for each of 13 candidates, totalling 65 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:  1.7min
[Parallel(n_jobs=-1)]: Done  65 out of  65 | elapsed:  2.9min finished
current score:  0.8462456721360831
/home/lzhu68/miniconda3/envs/ml/lib/python3.6/site-packages/keras_applications/mobilenet_v2.py:294: UserWarning:

`input_shape` is undefined or non-square, or `rows` is not in [96, 128, 160, 192, 224]. Weights for input shape (224, 224) will be loaded as the default.

current combination:  {'classifier': 'rbfSVM', 'cnn': 'MobileNetV2', 'dim_reductor': 'KPCA_32'}
extracting feature...
Fitting 5 folds for each of 13 candidates, totalling 65 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:    6.7s
[Parallel(n_jobs=-1)]: Done  65 out of  65 | elapsed:    9.7s finished
current score:  0.8396357067589945
/home/lzhu68/miniconda3/envs/ml/lib/python3.6/site-packages/keras_applications/mobilenet_v2.py:294: UserWarning:

`input_shape` is undefined or non-square, or `rows` is not in [96, 128, 160, 192, 224]. Weights for input shape (224, 224) will be loaded as the default.

current combination:  {'classifier': 'rbfSVM', 'cnn': 'MobileNetV2', 'dim_reductor': 'KPCA_64'}
extracting feature...
Fitting 5 folds for each of 13 candidates, totalling 65 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:    9.8s
[Parallel(n_jobs=-1)]: Done  65 out of  65 | elapsed:   14.4s finished
/home/lzhu68/miniconda3/envs/ml/lib/python3.6/site-packages/sklearn/utils/extmath.py:530: RuntimeWarning:

invalid value encountered in multiply

current score:  0.8374650007526719
/home/lzhu68/miniconda3/envs/ml/lib/python3.6/site-packages/keras_applications/mobilenet_v2.py:294: UserWarning:

`input_shape` is undefined or non-square, or `rows` is not in [96, 128, 160, 192, 224]. Weights for input shape (224, 224) will be loaded as the default.

current combination:  {'classifier': 'rbfSVM', 'cnn': 'MobileNetV2', 'dim_reductor': 'KPCA_128'}
extracting feature...
Fitting 5 folds for each of 13 candidates, totalling 65 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  34 tasks      | elapsed:   16.4s
[Parallel(n_jobs=-1)]: Done  65 out of  65 | elapsed:   27.2s finished
/home/lzhu68/miniconda3/envs/ml/lib/python3.6/site-packages/sklearn/utils/extmath.py:530: RuntimeWarning:

invalid value encountered in multiply

current score:  0.8341547493602288
In [15]:
report_df = pd.DataFrame(record_ls)

with pd.option_context('display.max_rows', None, 'display.max_columns', None):
    display(report_df)

report_df.to_csv(os.path.join('../output', 'cnn_feature_report.csv'), index=False)
cnn dim_reductor classifier valid_acc
0 VGG16 none LR 0.761105
1 VGG16 NMF_32 LR 0.712235
2 VGG16 NMF_64 LR 0.722678
3 VGG16 NMF_128 LR 0.724885
4 VGG16 KPCA_32 LR 0.716633
5 VGG16 KPCA_64 LR 0.731462
6 VGG16 KPCA_128 LR 0.748489
7 InceptionResNetV2 none LR 0.757259
8 InceptionResNetV2 NMF_32 LR 0.655132
9 InceptionResNetV2 NMF_64 LR 0.693015
10 InceptionResNetV2 NMF_128 LR 0.719386
11 InceptionResNetV2 KPCA_32 LR 0.682604
12 InceptionResNetV2 KPCA_64 LR 0.710051
13 InceptionResNetV2 KPCA_128 LR 0.716631
14 MobileNetV2 none LR 0.862716
15 MobileNetV2 NMF_32 LR 0.843498
16 MobileNetV2 NMF_64 LR 0.851730
17 MobileNetV2 NMF_128 LR 0.850093
18 MobileNetV2 KPCA_32 LR 0.822622
19 MobileNetV2 KPCA_64 LR 0.821525
20 MobileNetV2 KPCA_128 LR 0.829211
21 VGG16 none linearSVM 0.717742
22 VGG16 NMF_32 linearSVM 0.647438
23 VGG16 NMF_64 linearSVM 0.664456
24 VGG16 NMF_128 linearSVM 0.672144
25 VGG16 KPCA_32 linearSVM 0.650167
26 VGG16 KPCA_64 linearSVM 0.677087
27 VGG16 KPCA_128 linearSVM 0.710583
28 InceptionResNetV2 none linearSVM 0.738058
29 InceptionResNetV2 NMF_32 linearSVM 0.602406
30 InceptionResNetV2 NMF_64 linearSVM 0.640304
31 InceptionResNetV2 NMF_128 linearSVM 0.671076
32 InceptionResNetV2 KPCA_32 linearSVM 0.604048
33 InceptionResNetV2 KPCA_64 linearSVM 0.646878
34 InceptionResNetV2 KPCA_128 linearSVM 0.695195
35 MobileNetV2 none linearSVM 0.855574
36 MobileNetV2 NMF_32 linearSVM 0.786381
37 MobileNetV2 NMF_64 linearSVM 0.808346
38 MobileNetV2 NMF_128 linearSVM 0.823723
39 MobileNetV2 KPCA_32 linearSVM 0.792424
40 MobileNetV2 KPCA_64 linearSVM 0.823727
41 MobileNetV2 KPCA_128 linearSVM 0.828115
42 VGG16 none rbfSVM 0.772098
43 VGG16 NMF_32 rbfSVM 0.698510
44 VGG16 NMF_64 rbfSVM 0.705653
45 VGG16 NMF_128 rbfSVM 0.732538
46 VGG16 KPCA_32 rbfSVM 0.734760
47 VGG16 KPCA_64 rbfSVM 0.769350
48 VGG16 KPCA_128 rbfSVM 0.776481
49 InceptionResNetV2 none rbfSVM 0.742455
50 InceptionResNetV2 NMF_32 rbfSVM 0.641412
51 InceptionResNetV2 NMF_64 rbfSVM 0.672710
52 InceptionResNetV2 NMF_128 rbfSVM 0.701814
53 InceptionResNetV2 KPCA_32 rbfSVM 0.656782
54 InceptionResNetV2 KPCA_64 rbfSVM 0.688085
55 InceptionResNetV2 KPCA_128 rbfSVM 0.706190
56 MobileNetV2 none rbfSVM 0.859416
57 MobileNetV2 NMF_32 rbfSVM 0.836916
58 MobileNetV2 NMF_64 rbfSVM 0.845692
59 MobileNetV2 NMF_128 rbfSVM 0.846246
60 MobileNetV2 KPCA_32 rbfSVM 0.839636
61 MobileNetV2 KPCA_64 rbfSVM 0.837465
62 MobileNetV2 KPCA_128 rbfSVM 0.834155

Using deep features, we reach a validation accuracy of up to 0.862716, better than we got with BoW features. To further improve performance, we can finetune the whole CNN end-to-end so that the features adapt to this task.
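The winning combination can also be read off the report programmatically instead of by eye. A minimal sketch, using a handful of rows copied from the table above (the notebook's full `report_df` has 63 rows):

```python
import pandas as pd

# A few rows copied from the report table above; the real report_df has 63 rows.
report = pd.DataFrame([
    {'cnn': 'VGG16',             'dim_reductor': 'none', 'classifier': 'LR',     'valid_acc': 0.761105},
    {'cnn': 'InceptionResNetV2', 'dim_reductor': 'none', 'classifier': 'LR',     'valid_acc': 0.757259},
    {'cnn': 'MobileNetV2',       'dim_reductor': 'none', 'classifier': 'LR',     'valid_acc': 0.862716},
    {'cnn': 'MobileNetV2',       'dim_reductor': 'none', 'classifier': 'rbfSVM', 'valid_acc': 0.859416},
])

# The best combination is simply the row with the highest validation accuracy.
best = report.loc[report['valid_acc'].idxmax()]
print(best['cnn'], best['dim_reductor'], best['classifier'], best['valid_acc'])
# MobileNetV2 none LR 0.862716
```

Run on the full `report_df`, the same one-liner picks out MobileNetV2 features with no dimension reduction and logistic regression.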

End-to-end finetuned CNN

CNNs to be finetuned:

  • ResNet101V2
  • VGG16
  • InceptionResNetV2
  • MobileNetV2

The first three models are heavy architectures, while the last one, MobileNetV2, is lightweight. All available pretrained CNNs are listed in Module: tf.keras.applications

learning rate scheduler

In [16]:
LR_START = 0.0001
LR_MAX = 0.00005 * 8
LR_MIN = 0.0001
LR_RAMPUP_EPOCHS = 4
LR_SUSTAIN_EPOCHS = 6
LR_EXP_DECAY = .8

def lrfn(epoch):
    if epoch < LR_RAMPUP_EPOCHS:
        lr = (LR_MAX - LR_START) / LR_RAMPUP_EPOCHS * epoch + LR_START
    elif epoch < LR_RAMPUP_EPOCHS + LR_SUSTAIN_EPOCHS:
        lr = LR_MAX
    else:
        lr = (LR_MAX - LR_MIN) * LR_EXP_DECAY**(epoch - LR_RAMPUP_EPOCHS - LR_SUSTAIN_EPOCHS) + LR_MIN
    return lr
    
lr_callback = tf.keras.callbacks.LearningRateScheduler(lrfn, verbose=True)

rng = [i for i in range(EPOCHS)]
y = [lrfn(x) for x in rng]
plt.plot(rng, y)
print("Learning rate schedule: {:.3g} to {:.3g} to {:.3g}".format(y[0], max(y), y[-1]))
Learning rate schedule: 0.0001 to 0.0004 to 0.0001
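The printed summary can be checked without matplotlib or TensorFlow: the schedule has three phases (linear ramp-up, sustain at the peak, exponential decay back toward the floor). A standalone copy of the function above, evaluated at a few epochs (assuming EPOCHS = 40 as in the training log):

```python
# Same constants and schedule as in the cell above.
LR_START = 0.0001
LR_MAX = 0.00005 * 8   # 0.0004
LR_MIN = 0.0001
LR_RAMPUP_EPOCHS = 4
LR_SUSTAIN_EPOCHS = 6
LR_EXP_DECAY = 0.8

def lrfn(epoch):
    if epoch < LR_RAMPUP_EPOCHS:
        # phase 1: linear ramp-up from LR_START to LR_MAX
        return (LR_MAX - LR_START) / LR_RAMPUP_EPOCHS * epoch + LR_START
    if epoch < LR_RAMPUP_EPOCHS + LR_SUSTAIN_EPOCHS:
        # phase 2: hold at the peak
        return LR_MAX
    # phase 3: exponential decay back towards LR_MIN
    return (LR_MAX - LR_MIN) * LR_EXP_DECAY ** (epoch - LR_RAMPUP_EPOCHS - LR_SUSTAIN_EPOCHS) + LR_MIN

print(lrfn(0))   # 0.0001  (start)
print(lrfn(4))   # 0.0004  (peak reached after ramp-up)
print(lrfn(9))   # 0.0004  (still in the sustain phase)
print(lrfn(39))  # ~0.0001 (decayed back near the floor)
```

These values match the "0.0001 to 0.0004 to 0.0001" summary printed above.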

finetune models

In [17]:
record_ls = []

for cnn in ['ResNet101V2', 'VGG16', 'InceptionResNetV2', 'MobileNetV2']:
    print(f'Finetune {cnn}...')
    record = OrderedDict()
    
    with strategy.scope():
        # build model
        backbone = get_backbone(cnn)
        model = tf.keras.Sequential([
            backbone,
            L.GlobalMaxPooling2D(),
#             L.Dropout(0.3),
            L.Dense(4, activation='softmax')
            ])
        model.compile(
            optimizer = 'adam',
            loss = 'categorical_crossentropy',
            metrics=['categorical_accuracy']
            )
        model.summary()
        
        
        ckpt_path = os.path.join(ckpt_dir,
                                 f'finetuned_{cnn}.h5')
        checkpoint = tf.keras.callbacks.ModelCheckpoint(
            ckpt_path,
            verbose=1,
            monitor='val_categorical_accuracy',
            save_best_only=True,
            mode='auto') 

        STEPS_PER_EPOCH = train_labels.shape[0] // BATCH_SIZE
        history = model.fit(
            train_dataset, 
            epochs=EPOCHS, 
            callbacks=[lr_callback, checkpoint],
            steps_per_epoch=STEPS_PER_EPOCH,
            validation_data=valid_dataset
            )
        
        # display training curves
        display_training_curves(
            history.history['loss'], 
            history.history['val_loss'], 
            'loss', 211)
        display_training_curves(
            history.history['categorical_accuracy'], 
            history.history['val_categorical_accuracy'], 
            'accuracy', 212)
        plt.show()
        
        record['model'] = f'finetuned_{cnn}'
        best_idx = np.argmax(history.history['val_categorical_accuracy'])
        record['train_loss'] = history.history['loss'][best_idx]
        record['valid_loss'] = history.history['val_loss'][best_idx]
        record['train_acc'] = history.history['categorical_accuracy'][best_idx]
        record['valid_acc'] = history.history['val_categorical_accuracy'][best_idx]
        record_ls.append(record)
        
        # run testing with best model weights
        model.load_weights(ckpt_path)
        
        print('record: ', record['valid_loss'], record['valid_acc'])
#         val_loss, val_acc = model.evaluate(valid_dataset)
#         print('confirmation: ', val_loss, val_acc)
        
        print('Start inference on test dataset.')
        probs = model.predict(test_dataset, verbose=1)
        sub.loc[:, 'healthy':] = probs
        sub.to_csv(os.path.join(submission_dir, f'finetune_{cnn}.csv'), index=False)
#         sub.head()
    
    # release memory
    # https://forums.fast.ai/t/how-could-i-release-gpu-memory-of-keras/2023/19
    del model
    K.clear_session()
    gc.collect()
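The best-epoch bookkeeping in the loop above (argmax over `val_categorical_accuracy`, then reading every metric at that same epoch, which is the epoch the checkpoint kept) can be sanity-checked in isolation. A minimal sketch with a made-up 4-epoch history in the shape Keras's `History.history` uses:

```python
import numpy as np

# Made-up history; values are illustrative, not from an actual run.
history = {
    'loss':                     [2.57, 1.49, 1.18, 1.12],
    'val_loss':                 [2.96, 1.07, 1.53, 7.49],
    'categorical_accuracy':     [0.72, 0.83, 0.84, 0.86],
    'val_categorical_accuracy': [0.66, 0.85, 0.88, 0.68],
}

# Pick the epoch with the highest validation accuracy (the one
# save_best_only kept) and record all metrics at that epoch.
best_idx = int(np.argmax(history['val_categorical_accuracy']))
record = {
    'train_loss': history['loss'][best_idx],
    'valid_loss': history['val_loss'][best_idx],
    'train_acc':  history['categorical_accuracy'][best_idx],
    'valid_acc':  history['val_categorical_accuracy'][best_idx],
}
print(best_idx, record['valid_acc'])  # 2 0.88
```

Note that `train_loss`/`train_acc` here come from the best *validation* epoch, not the epoch with the lowest training loss; this matches what `ModelCheckpoint(monitor='val_categorical_accuracy', save_best_only=True)` stores.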
Finetune ResNet101V2...
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
resnet101v2 (Model)          (None, 13, 13, 2048)      42626560  
_________________________________________________________________
global_max_pooling2d (Global (None, 2048)              0         
_________________________________________________________________
dense (Dense)                (None, 4)                 8196      
=================================================================
Total params: 42,634,756
Trainable params: 42,537,092
Non-trainable params: 97,664
_________________________________________________________________
Train for 96 steps, validate for 18 steps

Epoch 00001: LearningRateScheduler reducing learning rate to 0.0001.
Epoch 1/40
95/96 [============================>.] - ETA: 0s - loss: 2.5841 - categorical_accuracy: 0.7191
Epoch 00001: val_categorical_accuracy improved from -inf to 0.65693, saving model to ../output/best_models/finetuned_ResNet101V2.h5
96/96 [==============================] - 50s 520ms/step - loss: 2.5672 - categorical_accuracy: 0.7201 - val_loss: 2.9635 - val_categorical_accuracy: 0.6569

Epoch 00002: LearningRateScheduler reducing learning rate to 0.00017500000000000003.
Epoch 2/40
95/96 [============================>.] - ETA: 0s - loss: 1.4904 - categorical_accuracy: 0.8283
Epoch 00002: val_categorical_accuracy improved from 0.65693 to 0.85036, saving model to ../output/best_models/finetuned_ResNet101V2.h5
96/96 [==============================] - 38s 397ms/step - loss: 1.4877 - categorical_accuracy: 0.8281 - val_loss: 1.0664 - val_categorical_accuracy: 0.8504

Epoch 00003: LearningRateScheduler reducing learning rate to 0.00025.
Epoch 3/40
95/96 [============================>.] - ETA: 0s - loss: 1.1897 - categorical_accuracy: 0.8428
Epoch 00003: val_categorical_accuracy improved from 0.85036 to 0.87956, saving model to ../output/best_models/finetuned_ResNet101V2.h5
96/96 [==============================] - 38s 398ms/step - loss: 1.1808 - categorical_accuracy: 0.8438 - val_loss: 1.5303 - val_categorical_accuracy: 0.8796

Epoch 00004: LearningRateScheduler reducing learning rate to 0.00032500000000000004.
Epoch 4/40
95/96 [============================>.] - ETA: 0s - loss: 1.1198 - categorical_accuracy: 0.8553
Epoch 00004: val_categorical_accuracy did not improve from 0.87956
96/96 [==============================] - 37s 381ms/step - loss: 1.1189 - categorical_accuracy: 0.8555 - val_loss: 7.4856 - val_categorical_accuracy: 0.6825

Epoch 00005: LearningRateScheduler reducing learning rate to 0.0004.
Epoch 5/40
95/96 [============================>.] - ETA: 0s - loss: 0.9448 - categorical_accuracy: 0.8533
Epoch 00005: val_categorical_accuracy did not improve from 0.87956
96/96 [==============================] - 37s 382ms/step - loss: 0.9446 - categorical_accuracy: 0.8542 - val_loss: 7.8283 - val_categorical_accuracy: 0.6241

Epoch 00006: LearningRateScheduler reducing learning rate to 0.0004.
Epoch 6/40
95/96 [============================>.] - ETA: 0s - loss: 0.3054 - categorical_accuracy: 0.9079
Epoch 00006: val_categorical_accuracy improved from 0.87956 to 0.93431, saving model to ../output/best_models/finetuned_ResNet101V2.h5
96/96 [==============================] - 39s 402ms/step - loss: 0.3053 - categorical_accuracy: 0.9082 - val_loss: 0.2360 - val_categorical_accuracy: 0.9343

Epoch 00007: LearningRateScheduler reducing learning rate to 0.0004.
Epoch 7/40
95/96 [============================>.] - ETA: 0s - loss: 0.1734 - categorical_accuracy: 0.9507
Epoch 00007: val_categorical_accuracy did not improve from 0.93431
96/96 [==============================] - 37s 383ms/step - loss: 0.1716 - categorical_accuracy: 0.9512 - val_loss: 0.2218 - val_categorical_accuracy: 0.9197

Epoch 00008: LearningRateScheduler reducing learning rate to 0.0004.
Epoch 8/40
95/96 [============================>.] - ETA: 0s - loss: 0.1512 - categorical_accuracy: 0.9586
Epoch 00008: val_categorical_accuracy improved from 0.93431 to 0.95620, saving model to ../output/best_models/finetuned_ResNet101V2.h5
96/96 [==============================] - 39s 403ms/step - loss: 0.1500 - categorical_accuracy: 0.9590 - val_loss: 0.1275 - val_categorical_accuracy: 0.9562

Epoch 00009: LearningRateScheduler reducing learning rate to 0.0004.
Epoch 9/40
95/96 [============================>.] - ETA: 0s - loss: 0.0810 - categorical_accuracy: 0.9730
Epoch 00009: val_categorical_accuracy did not improve from 0.95620
96/96 [==============================] - 37s 384ms/step - loss: 0.0803 - categorical_accuracy: 0.9733 - val_loss: 0.0972 - val_categorical_accuracy: 0.9526

Epoch 00010: LearningRateScheduler reducing learning rate to 0.0004.
Epoch 10/40
95/96 [============================>.] - ETA: 0s - loss: 0.1450 - categorical_accuracy: 0.9625
Epoch 00010: val_categorical_accuracy did not improve from 0.95620
96/96 [==============================] - 37s 384ms/step - loss: 0.1472 - categorical_accuracy: 0.9622 - val_loss: 0.4845 - val_categorical_accuracy: 0.8832

Epoch 00011: LearningRateScheduler reducing learning rate to 0.0004.
Epoch 11/40
95/96 [============================>.] - ETA: 0s - loss: 0.0829 - categorical_accuracy: 0.9750
Epoch 00011: val_categorical_accuracy did not improve from 0.95620
96/96 [==============================] - 37s 384ms/step - loss: 0.0868 - categorical_accuracy: 0.9746 - val_loss: 0.1752 - val_categorical_accuracy: 0.9526

Epoch 00012: LearningRateScheduler reducing learning rate to 0.00034.
Epoch 12/40
95/96 [============================>.] - ETA: 0s - loss: 0.0838 - categorical_accuracy: 0.9789
Epoch 00012: val_categorical_accuracy did not improve from 0.95620
96/96 [==============================] - 37s 384ms/step - loss: 0.0833 - categorical_accuracy: 0.9792 - val_loss: 0.1815 - val_categorical_accuracy: 0.9489

Epoch 00013: LearningRateScheduler reducing learning rate to 0.00029200000000000005.
Epoch 13/40
95/96 [============================>.] - ETA: 0s - loss: 0.0528 - categorical_accuracy: 0.9822
Epoch 00013: val_categorical_accuracy improved from 0.95620 to 0.96350, saving model to ../output/best_models/finetuned_ResNet101V2.h5
96/96 [==============================] - 39s 403ms/step - loss: 0.0528 - categorical_accuracy: 0.9818 - val_loss: 0.1341 - val_categorical_accuracy: 0.9635

Epoch 00014: LearningRateScheduler reducing learning rate to 0.00025360000000000004.
Epoch 14/40
95/96 [============================>.] - ETA: 0s - loss: 0.0249 - categorical_accuracy: 0.9947
Epoch 00014: val_categorical_accuracy did not improve from 0.96350
96/96 [==============================] - 37s 385ms/step - loss: 0.0249 - categorical_accuracy: 0.9948 - val_loss: 0.1121 - val_categorical_accuracy: 0.9599

Epoch 00015: LearningRateScheduler reducing learning rate to 0.00022288000000000006.
Epoch 15/40
95/96 [============================>.] - ETA: 0s - loss: 0.0267 - categorical_accuracy: 0.9954
Epoch 00015: val_categorical_accuracy did not improve from 0.96350
96/96 [==============================] - 37s 384ms/step - loss: 0.0264 - categorical_accuracy: 0.9954 - val_loss: 0.1299 - val_categorical_accuracy: 0.9562

Epoch 00016: LearningRateScheduler reducing learning rate to 0.00019830400000000006.
Epoch 16/40
95/96 [============================>.] - ETA: 0s - loss: 0.0286 - categorical_accuracy: 0.9928
Epoch 00016: val_categorical_accuracy improved from 0.96350 to 0.97445, saving model to ../output/best_models/finetuned_ResNet101V2.h5
96/96 [==============================] - 39s 403ms/step - loss: 0.0283 - categorical_accuracy: 0.9928 - val_loss: 0.1199 - val_categorical_accuracy: 0.9745

Epoch 00017: LearningRateScheduler reducing learning rate to 0.00017864320000000004.
Epoch 17/40
95/96 [============================>.] - ETA: 0s - loss: 0.0159 - categorical_accuracy: 0.9961
Epoch 00017: val_categorical_accuracy did not improve from 0.97445
96/96 [==============================] - 37s 384ms/step - loss: 0.0158 - categorical_accuracy: 0.9961 - val_loss: 0.1492 - val_categorical_accuracy: 0.9635

Epoch 00018: LearningRateScheduler reducing learning rate to 0.00016291456000000005.
Epoch 18/40
95/96 [============================>.] - ETA: 0s - loss: 0.0199 - categorical_accuracy: 0.9947
Epoch 00018: val_categorical_accuracy did not improve from 0.97445
96/96 [==============================] - 37s 384ms/step - loss: 0.0197 - categorical_accuracy: 0.9948 - val_loss: 0.1196 - val_categorical_accuracy: 0.9672

Epoch 00019: LearningRateScheduler reducing learning rate to 0.00015033164800000003.
Epoch 19/40
95/96 [============================>.] - ETA: 0s - loss: 0.0094 - categorical_accuracy: 0.9954
Epoch 00019: val_categorical_accuracy did not improve from 0.97445
96/96 [==============================] - 37s 384ms/step - loss: 0.0093 - categorical_accuracy: 0.9954 - val_loss: 0.1257 - val_categorical_accuracy: 0.9708

Epoch 00020: LearningRateScheduler reducing learning rate to 0.00014026531840000004.
Epoch 20/40
95/96 [============================>.] - ETA: 0s - loss: 0.0040 - categorical_accuracy: 0.9987
Epoch 00020: val_categorical_accuracy did not improve from 0.97445
96/96 [==============================] - 37s 384ms/step - loss: 0.0042 - categorical_accuracy: 0.9987 - val_loss: 0.1172 - val_categorical_accuracy: 0.9599

Epoch 00021: LearningRateScheduler reducing learning rate to 0.00013221225472000002.
Epoch 21/40
95/96 [============================>.] - ETA: 0s - loss: 0.0283 - categorical_accuracy: 0.9941
Epoch 00021: val_categorical_accuracy did not improve from 0.97445
96/96 [==============================] - 37s 384ms/step - loss: 0.0280 - categorical_accuracy: 0.9941 - val_loss: 0.1727 - val_categorical_accuracy: 0.9489

Epoch 00022: LearningRateScheduler reducing learning rate to 0.00012576980377600002.
Epoch 22/40
95/96 [============================>.] - ETA: 0s - loss: 0.0037 - categorical_accuracy: 0.9993
Epoch 00022: val_categorical_accuracy did not improve from 0.97445
96/96 [==============================] - 37s 384ms/step - loss: 0.0037 - categorical_accuracy: 0.9993 - val_loss: 0.1428 - val_categorical_accuracy: 0.9599

Epoch 00023: LearningRateScheduler reducing learning rate to 0.00012061584302080001.
Epoch 23/40
95/96 [============================>.] - ETA: 0s - loss: 0.0078 - categorical_accuracy: 0.9980
Epoch 00023: val_categorical_accuracy did not improve from 0.97445
96/96 [==============================] - 37s 384ms/step - loss: 0.0077 - categorical_accuracy: 0.9980 - val_loss: 0.1410 - val_categorical_accuracy: 0.9526

Epoch 00024: LearningRateScheduler reducing learning rate to 0.00011649267441664002.
Epoch 24/40
95/96 [============================>.] - ETA: 0s - loss: 0.0108 - categorical_accuracy: 0.9974
Epoch 00024: val_categorical_accuracy did not improve from 0.97445
96/96 [==============================] - 37s 384ms/step - loss: 0.0107 - categorical_accuracy: 0.9974 - val_loss: 0.1579 - val_categorical_accuracy: 0.9635

Epoch 00025: LearningRateScheduler reducing learning rate to 0.00011319413953331202.
Epoch 25/40
95/96 [============================>.] - ETA: 0s - loss: 0.0162 - categorical_accuracy: 0.9947
Epoch 00025: val_categorical_accuracy did not improve from 0.97445
96/96 [==============================] - 37s 384ms/step - loss: 0.0161 - categorical_accuracy: 0.9948 - val_loss: 0.1405 - val_categorical_accuracy: 0.9562

Epoch 00026: LearningRateScheduler reducing learning rate to 0.00011055531162664962.
Epoch 26/40
95/96 [============================>.] - ETA: 0s - loss: 0.0139 - categorical_accuracy: 0.9967
Epoch 00026: val_categorical_accuracy did not improve from 0.97445
96/96 [==============================] - 37s 384ms/step - loss: 0.0138 - categorical_accuracy: 0.9967 - val_loss: 0.1492 - val_categorical_accuracy: 0.9562

Epoch 00027: LearningRateScheduler reducing learning rate to 0.0001084442493013197.
Epoch 27/40
95/96 [============================>.] - ETA: 0s - loss: 0.0063 - categorical_accuracy: 0.9967
Epoch 00027: val_categorical_accuracy did not improve from 0.97445
96/96 [==============================] - 37s 384ms/step - loss: 0.0063 - categorical_accuracy: 0.9967 - val_loss: 0.1635 - val_categorical_accuracy: 0.9599

Epoch 00028: LearningRateScheduler reducing learning rate to 0.00010675539944105576.
Epoch 28/40
95/96 [============================>.] - ETA: 0s - loss: 0.0084 - categorical_accuracy: 0.9980
Epoch 00028: val_categorical_accuracy did not improve from 0.97445
96/96 [==============================] - 37s 384ms/step - loss: 0.0083 - categorical_accuracy: 0.9980 - val_loss: 0.1369 - val_categorical_accuracy: 0.9526

Epoch 00029: LearningRateScheduler reducing learning rate to 0.0001054043195528446.
Epoch 29/40
95/96 [============================>.] - ETA: 0s - loss: 0.0087 - categorical_accuracy: 0.9961
Epoch 00029: val_categorical_accuracy did not improve from 0.97445
96/96 [==============================] - 37s 384ms/step - loss: 0.0087 - categorical_accuracy: 0.9961 - val_loss: 0.1645 - val_categorical_accuracy: 0.9453

Epoch 00030: LearningRateScheduler reducing learning rate to 0.00010432345564227568.
Epoch 30/40
95/96 [============================>.] - ETA: 0s - loss: 0.0135 - categorical_accuracy: 0.9954
Epoch 00030: val_categorical_accuracy did not improve from 0.97445
96/96 [==============================] - 37s 384ms/step - loss: 0.0135 - categorical_accuracy: 0.9954 - val_loss: 0.2080 - val_categorical_accuracy: 0.9562

Epoch 00031: LearningRateScheduler reducing learning rate to 0.00010345876451382055.
Epoch 31/40
95/96 [============================>.] - ETA: 0s - loss: 0.0081 - categorical_accuracy: 0.9993
Epoch 00031: val_categorical_accuracy did not improve from 0.97445
96/96 [==============================] - 37s 384ms/step - loss: 0.0080 - categorical_accuracy: 0.9993 - val_loss: 0.1706 - val_categorical_accuracy: 0.9526

Epoch 00032: LearningRateScheduler reducing learning rate to 0.00010276701161105644.
Epoch 32/40
95/96 [============================>.] - ETA: 0s - loss: 0.0030 - categorical_accuracy: 0.9993
Epoch 00032: val_categorical_accuracy did not improve from 0.97445
96/96 [==============================] - 37s 384ms/step - loss: 0.0030 - categorical_accuracy: 0.9993 - val_loss: 0.1446 - val_categorical_accuracy: 0.9562

Epoch 00033: LearningRateScheduler reducing learning rate to 0.00010221360928884516.
Epoch 33/40
95/96 [============================>.] - ETA: 0s - loss: 0.0016 - categorical_accuracy: 1.0000
Epoch 00033: val_categorical_accuracy did not improve from 0.97445
96/96 [==============================] - 37s 384ms/step - loss: 0.0015 - categorical_accuracy: 1.0000 - val_loss: 0.1600 - val_categorical_accuracy: 0.9562

Epoch 00034: LearningRateScheduler reducing learning rate to 0.00010177088743107613.
Epoch 34/40
95/96 [============================>.] - ETA: 0s - loss: 0.0035 - categorical_accuracy: 0.9993
Epoch 00034: val_categorical_accuracy did not improve from 0.97445
96/96 [==============================] - 37s 384ms/step - loss: 0.0035 - categorical_accuracy: 0.9993 - val_loss: 0.2315 - val_categorical_accuracy: 0.9635

Epoch 00035: LearningRateScheduler reducing learning rate to 0.0001014167099448609.
Epoch 35/40
95/96 [============================>.] - ETA: 0s - loss: 0.0177 - categorical_accuracy: 0.9980
Epoch 00035: val_categorical_accuracy did not improve from 0.97445
96/96 [==============================] - 37s 384ms/step - loss: 0.0175 - categorical_accuracy: 0.9980 - val_loss: 0.2024 - val_categorical_accuracy: 0.9635

Epoch 00036: LearningRateScheduler reducing learning rate to 0.00010113336795588872.
Epoch 36/40
95/96 [============================>.] - ETA: 0s - loss: 0.0024 - categorical_accuracy: 0.9993
Epoch 00036: val_categorical_accuracy did not improve from 0.97445
96/96 [==============================] - 37s 384ms/step - loss: 0.0025 - categorical_accuracy: 0.9993 - val_loss: 0.2387 - val_categorical_accuracy: 0.9380

Epoch 00037: LearningRateScheduler reducing learning rate to 0.00010090669436471098.
Epoch 37/40
95/96 [============================>.] - ETA: 0s - loss: 0.0183 - categorical_accuracy: 0.9980
Epoch 00037: val_categorical_accuracy did not improve from 0.97445
96/96 [==============================] - 37s 384ms/step - loss: 0.0181 - categorical_accuracy: 0.9980 - val_loss: 0.2213 - val_categorical_accuracy: 0.9489

Epoch 00038: LearningRateScheduler reducing learning rate to 0.00010072535549176879.
Epoch 38/40
95/96 [============================>.] - ETA: 0s - loss: 0.0216 - categorical_accuracy: 0.9954
Epoch 00038: val_categorical_accuracy did not improve from 0.97445
96/96 [==============================] - 37s 384ms/step - loss: 0.0214 - categorical_accuracy: 0.9954 - val_loss: 0.2220 - val_categorical_accuracy: 0.9453

Epoch 00039: LearningRateScheduler reducing learning rate to 0.00010058028439341503.
Epoch 39/40
95/96 [============================>.] - ETA: 0s - loss: 0.0070 - categorical_accuracy: 0.9974
Epoch 00039: val_categorical_accuracy did not improve from 0.97445
96/96 [==============================] - 37s 384ms/step - loss: 0.0071 - categorical_accuracy: 0.9974 - val_loss: 0.2101 - val_categorical_accuracy: 0.9380

Epoch 00040: LearningRateScheduler reducing learning rate to 0.00010046422751473202.
Epoch 40/40
95/96 [============================>.] - ETA: 0s - loss: 0.0196 - categorical_accuracy: 0.9947
Epoch 00040: val_categorical_accuracy did not improve from 0.97445
96/96 [==============================] - 37s 384ms/step - loss: 0.0195 - categorical_accuracy: 0.9948 - val_loss: 0.1800 - val_categorical_accuracy: 0.9599
record (val_loss and val_categorical_accuracy at the best checkpoint):  0.11987132637204923 0.97445256
Start inference on test dataset.
114/114 [==============================] - 15s 135ms/step
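The learning rates printed by `LearningRateScheduler` above follow a ramp/sustain/decay pattern: a linear warmup from 1e-4 to 4e-4 over the first epochs, a plateau at 4e-4 through epoch 11, then exponential decay back toward 1e-4. A minimal sketch consistent with the logged values — the constant names and the exact ramp/sustain lengths are inferred from the log, not copied from the notebook's code:

```python
# Reconstructed ramp/sustain/decay schedule; constants inferred from the
# logged learning rates (e.g. epoch 2 -> 1.75e-4, epoch 12 -> 3.4e-4).
START_LR = 1e-4        # epoch 1
MAX_LR = 4e-4          # plateau value, epochs 5-11 in the log
MIN_LR = 1e-4          # asymptote of the decay phase
RAMPUP_EPOCHS = 4
SUSTAIN_EPOCHS = 6
EXP_DECAY = 0.8

def lrfn(epoch):
    """Learning rate for a 0-indexed epoch (Keras passes 0-based epochs)."""
    if epoch < RAMPUP_EPOCHS:
        # linear warmup from START_LR to MAX_LR
        return (MAX_LR - START_LR) / RAMPUP_EPOCHS * epoch + START_LR
    if epoch < RAMPUP_EPOCHS + SUSTAIN_EPOCHS:
        # hold at the peak
        return MAX_LR
    # exponential decay toward MIN_LR
    return (MAX_LR - MIN_LR) * EXP_DECAY ** (epoch - RAMPUP_EPOCHS - SUSTAIN_EPOCHS) + MIN_LR
```

Attached via `tf.keras.callbacks.LearningRateScheduler(lrfn, verbose=1)`, this reproduces the printed values; for instance `lrfn(14)` gives the `0.00022288...` announced before epoch 15.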
Finetune VGG16...
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
vgg16 (Model)                (None, 12, 12, 512)       14714688  
_________________________________________________________________
global_max_pooling2d (Global (None, 512)               0         
_________________________________________________________________
dense (Dense)                (None, 4)                 2052      
=================================================================
Total params: 14,716,740
Trainable params: 14,716,740
Non-trainable params: 0
_________________________________________________________________
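Every fine-tuned model in this section shares the same head: the backbone's feature map, a `GlobalMaxPooling2D`, and a 4-way `Dense` softmax. The `Dense` parameter counts in the summaries (2,052 for VGG16's 512-d pooled feature, 6,148 for InceptionResNetV2's 1536-d one) can be checked by hand; this helper is an illustrative sketch, not the notebook's code:

```python
def dense_params(in_features, n_classes):
    # weight matrix plus bias vector of a fully connected layer
    return in_features * n_classes + n_classes

# VGG16 head: 512-d pooled feature -> 4 classes
print(dense_params(512, 4))    # 2052, matching the summary
# InceptionResNetV2 head: 1536-d pooled feature -> 4 classes
print(dense_params(1536, 4))   # 6148, matching the summary
```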
Train for 96 steps, validate for 18 steps

Epoch 00001: LearningRateScheduler reducing learning rate to 0.0001.
Epoch 1/40
95/96 [============================>.] - ETA: 0s - loss: 1.1890 - categorical_accuracy: 0.4651
Epoch 00001: val_categorical_accuracy improved from -inf to 0.47445, saving model to ../output/best_models/finetuned_VGG16.h5
96/96 [==============================] - 49s 514ms/step - loss: 1.1869 - categorical_accuracy: 0.4674 - val_loss: 1.1077 - val_categorical_accuracy: 0.4745

Epoch 00002: LearningRateScheduler reducing learning rate to 0.00017500000000000003.
Epoch 2/40
95/96 [============================>.] - ETA: 0s - loss: 0.5640 - categorical_accuracy: 0.8145
Epoch 00002: val_categorical_accuracy improved from 0.47445 to 0.86496, saving model to ../output/best_models/finetuned_VGG16.h5
96/96 [==============================] - 49s 505ms/step - loss: 0.5619 - categorical_accuracy: 0.8151 - val_loss: 0.3319 - val_categorical_accuracy: 0.8650

Epoch 00003: LearningRateScheduler reducing learning rate to 0.00025.
Epoch 3/40
95/96 [============================>.] - ETA: 0s - loss: 0.4806 - categorical_accuracy: 0.8467
Epoch 00003: val_categorical_accuracy improved from 0.86496 to 0.90146, saving model to ../output/best_models/finetuned_VGG16.h5
96/96 [==============================] - 48s 504ms/step - loss: 0.4797 - categorical_accuracy: 0.8470 - val_loss: 0.3322 - val_categorical_accuracy: 0.9015

Epoch 00004: LearningRateScheduler reducing learning rate to 0.00032500000000000004.
Epoch 4/40
95/96 [============================>.] - ETA: 0s - loss: 0.3376 - categorical_accuracy: 0.8954
Epoch 00004: val_categorical_accuracy did not improve from 0.90146
96/96 [==============================] - 48s 498ms/step - loss: 0.3351 - categorical_accuracy: 0.8965 - val_loss: 0.3604 - val_categorical_accuracy: 0.8796

Epoch 00005: LearningRateScheduler reducing learning rate to 0.0004.
Epoch 5/40
95/96 [============================>.] - ETA: 0s - loss: 0.3759 - categorical_accuracy: 0.8789
Epoch 00005: val_categorical_accuracy did not improve from 0.90146
96/96 [==============================] - 48s 498ms/step - loss: 0.3798 - categorical_accuracy: 0.8770 - val_loss: 0.7596 - val_categorical_accuracy: 0.6642

Epoch 00006: LearningRateScheduler reducing learning rate to 0.0004.
Epoch 6/40
95/96 [============================>.] - ETA: 0s - loss: 0.5842 - categorical_accuracy: 0.8020
Epoch 00006: val_categorical_accuracy improved from 0.90146 to 0.91971, saving model to ../output/best_models/finetuned_VGG16.h5
96/96 [==============================] - 48s 502ms/step - loss: 0.5794 - categorical_accuracy: 0.8040 - val_loss: 0.2597 - val_categorical_accuracy: 0.9197

Epoch 00007: LearningRateScheduler reducing learning rate to 0.0004.
Epoch 7/40
95/96 [============================>.] - ETA: 0s - loss: 0.3115 - categorical_accuracy: 0.8987
Epoch 00007: val_categorical_accuracy did not improve from 0.91971
96/96 [==============================] - 48s 496ms/step - loss: 0.3168 - categorical_accuracy: 0.8971 - val_loss: 0.4096 - val_categorical_accuracy: 0.8321

Epoch 00008: LearningRateScheduler reducing learning rate to 0.0004.
Epoch 8/40
95/96 [============================>.] - ETA: 0s - loss: 0.2345 - categorical_accuracy: 0.9250
Epoch 00008: val_categorical_accuracy did not improve from 0.91971
96/96 [==============================] - 48s 496ms/step - loss: 0.2328 - categorical_accuracy: 0.9258 - val_loss: 0.5227 - val_categorical_accuracy: 0.8832

Epoch 00009: LearningRateScheduler reducing learning rate to 0.0004.
Epoch 9/40
95/96 [============================>.] - ETA: 0s - loss: 0.2639 - categorical_accuracy: 0.9184
Epoch 00009: val_categorical_accuracy improved from 0.91971 to 0.94526, saving model to ../output/best_models/finetuned_VGG16.h5
96/96 [==============================] - 48s 501ms/step - loss: 0.2616 - categorical_accuracy: 0.9193 - val_loss: 0.1603 - val_categorical_accuracy: 0.9453

Epoch 00010: LearningRateScheduler reducing learning rate to 0.0004.
Epoch 10/40
95/96 [============================>.] - ETA: 0s - loss: 0.2198 - categorical_accuracy: 0.9329
Epoch 00010: val_categorical_accuracy did not improve from 0.94526
96/96 [==============================] - 48s 496ms/step - loss: 0.2181 - categorical_accuracy: 0.9336 - val_loss: 0.2336 - val_categorical_accuracy: 0.9270

Epoch 00011: LearningRateScheduler reducing learning rate to 0.0004.
Epoch 11/40
95/96 [============================>.] - ETA: 0s - loss: 0.1849 - categorical_accuracy: 0.9441
Epoch 00011: val_categorical_accuracy did not improve from 0.94526
96/96 [==============================] - 48s 496ms/step - loss: 0.1909 - categorical_accuracy: 0.9421 - val_loss: 0.3378 - val_categorical_accuracy: 0.9197

Epoch 00012: LearningRateScheduler reducing learning rate to 0.00034.
Epoch 12/40
95/96 [============================>.] - ETA: 0s - loss: 0.2134 - categorical_accuracy: 0.9322
Epoch 00012: val_categorical_accuracy did not improve from 0.94526
96/96 [==============================] - 48s 496ms/step - loss: 0.2119 - categorical_accuracy: 0.9329 - val_loss: 0.1881 - val_categorical_accuracy: 0.9307

Epoch 00013: LearningRateScheduler reducing learning rate to 0.00029200000000000005.
Epoch 13/40
95/96 [============================>.] - ETA: 0s - loss: 0.1599 - categorical_accuracy: 0.9408
Epoch 00013: val_categorical_accuracy improved from 0.94526 to 0.95985, saving model to ../output/best_models/finetuned_VGG16.h5
96/96 [==============================] - 48s 501ms/step - loss: 0.1621 - categorical_accuracy: 0.9395 - val_loss: 0.1481 - val_categorical_accuracy: 0.9599

Epoch 00014: LearningRateScheduler reducing learning rate to 0.00025360000000000004.
Epoch 14/40
95/96 [============================>.] - ETA: 0s - loss: 0.1016 - categorical_accuracy: 0.9658
Epoch 00014: val_categorical_accuracy did not improve from 0.95985
96/96 [==============================] - 48s 496ms/step - loss: 0.1008 - categorical_accuracy: 0.9661 - val_loss: 0.1698 - val_categorical_accuracy: 0.9380

Epoch 00015: LearningRateScheduler reducing learning rate to 0.00022288000000000006.
Epoch 15/40
95/96 [============================>.] - ETA: 0s - loss: 0.1172 - categorical_accuracy: 0.9645
Epoch 00015: val_categorical_accuracy did not improve from 0.95985
96/96 [==============================] - 48s 495ms/step - loss: 0.1165 - categorical_accuracy: 0.9648 - val_loss: 0.1611 - val_categorical_accuracy: 0.9270

Epoch 00016: LearningRateScheduler reducing learning rate to 0.00019830400000000006.
Epoch 16/40
95/96 [============================>.] - ETA: 0s - loss: 0.0998 - categorical_accuracy: 0.9638
Epoch 00016: val_categorical_accuracy did not improve from 0.95985
96/96 [==============================] - 48s 495ms/step - loss: 0.0997 - categorical_accuracy: 0.9635 - val_loss: 0.3696 - val_categorical_accuracy: 0.9234

Epoch 00017: LearningRateScheduler reducing learning rate to 0.00017864320000000004.
Epoch 17/40
95/96 [============================>.] - ETA: 0s - loss: 0.1068 - categorical_accuracy: 0.9711
Epoch 00017: val_categorical_accuracy did not improve from 0.95985
96/96 [==============================] - 48s 496ms/step - loss: 0.1091 - categorical_accuracy: 0.9707 - val_loss: 0.2173 - val_categorical_accuracy: 0.9453

Epoch 00018: LearningRateScheduler reducing learning rate to 0.00016291456000000005.
Epoch 18/40
95/96 [============================>.] - ETA: 0s - loss: 0.0743 - categorical_accuracy: 0.9737
Epoch 00018: val_categorical_accuracy did not improve from 0.95985
96/96 [==============================] - 48s 496ms/step - loss: 0.0738 - categorical_accuracy: 0.9740 - val_loss: 0.1756 - val_categorical_accuracy: 0.9380

Epoch 00019: LearningRateScheduler reducing learning rate to 0.00015033164800000003.
Epoch 19/40
95/96 [============================>.] - ETA: 0s - loss: 0.0737 - categorical_accuracy: 0.9776
Epoch 00019: val_categorical_accuracy did not improve from 0.95985
96/96 [==============================] - 48s 495ms/step - loss: 0.0731 - categorical_accuracy: 0.9779 - val_loss: 0.1631 - val_categorical_accuracy: 0.9380

Epoch 00020: LearningRateScheduler reducing learning rate to 0.00014026531840000004.
Epoch 20/40
95/96 [============================>.] - ETA: 0s - loss: 0.0631 - categorical_accuracy: 0.9763
Epoch 00020: val_categorical_accuracy did not improve from 0.95985
96/96 [==============================] - 48s 496ms/step - loss: 0.0625 - categorical_accuracy: 0.9766 - val_loss: 0.1316 - val_categorical_accuracy: 0.9562

Epoch 00021: LearningRateScheduler reducing learning rate to 0.00013221225472000002.
Epoch 21/40
95/96 [============================>.] - ETA: 0s - loss: 0.0584 - categorical_accuracy: 0.9809
Epoch 00021: val_categorical_accuracy did not improve from 0.95985
96/96 [==============================] - 48s 496ms/step - loss: 0.0580 - categorical_accuracy: 0.9811 - val_loss: 0.1375 - val_categorical_accuracy: 0.9489

Epoch 00022: LearningRateScheduler reducing learning rate to 0.00012576980377600002.
Epoch 22/40
95/96 [============================>.] - ETA: 0s - loss: 0.0404 - categorical_accuracy: 0.9882
Epoch 00022: val_categorical_accuracy did not improve from 0.95985
96/96 [==============================] - 48s 496ms/step - loss: 0.0418 - categorical_accuracy: 0.9870 - val_loss: 0.2171 - val_categorical_accuracy: 0.9380

Epoch 00023: LearningRateScheduler reducing learning rate to 0.00012061584302080001.
Epoch 23/40
95/96 [============================>.] - ETA: 0s - loss: 0.0760 - categorical_accuracy: 0.9783
Epoch 00023: val_categorical_accuracy did not improve from 0.95985
96/96 [==============================] - 48s 496ms/step - loss: 0.0754 - categorical_accuracy: 0.9785 - val_loss: 0.2409 - val_categorical_accuracy: 0.9197

Epoch 00024: LearningRateScheduler reducing learning rate to 0.00011649267441664002.
Epoch 24/40
95/96 [============================>.] - ETA: 0s - loss: 0.0463 - categorical_accuracy: 0.9829
Epoch 00024: val_categorical_accuracy did not improve from 0.95985
96/96 [==============================] - 48s 496ms/step - loss: 0.0463 - categorical_accuracy: 0.9831 - val_loss: 0.1496 - val_categorical_accuracy: 0.9489

Epoch 00025: LearningRateScheduler reducing learning rate to 0.00011319413953331202.
Epoch 25/40
95/96 [============================>.] - ETA: 0s - loss: 0.0228 - categorical_accuracy: 0.9921
Epoch 00025: val_categorical_accuracy did not improve from 0.95985
96/96 [==============================] - 48s 496ms/step - loss: 0.0226 - categorical_accuracy: 0.9922 - val_loss: 0.1503 - val_categorical_accuracy: 0.9526

Epoch 00026: LearningRateScheduler reducing learning rate to 0.00011055531162664962.
Epoch 26/40
95/96 [============================>.] - ETA: 0s - loss: 0.0380 - categorical_accuracy: 0.9908
Epoch 00026: val_categorical_accuracy did not improve from 0.95985
96/96 [==============================] - 48s 496ms/step - loss: 0.0384 - categorical_accuracy: 0.9902 - val_loss: 0.1456 - val_categorical_accuracy: 0.9599

Epoch 00027: LearningRateScheduler reducing learning rate to 0.0001084442493013197.
Epoch 27/40
95/96 [============================>.] - ETA: 0s - loss: 0.0266 - categorical_accuracy: 0.9928
Epoch 00027: val_categorical_accuracy did not improve from 0.95985
96/96 [==============================] - 48s 496ms/step - loss: 0.0263 - categorical_accuracy: 0.9928 - val_loss: 0.1678 - val_categorical_accuracy: 0.9562

Epoch 00028: LearningRateScheduler reducing learning rate to 0.00010675539944105576.
Epoch 28/40
95/96 [============================>.] - ETA: 0s - loss: 0.0137 - categorical_accuracy: 0.9961
Epoch 00028: val_categorical_accuracy improved from 0.95985 to 0.96350, saving model to ../output/best_models/finetuned_VGG16.h5
96/96 [==============================] - 48s 501ms/step - loss: 0.0142 - categorical_accuracy: 0.9954 - val_loss: 0.1424 - val_categorical_accuracy: 0.9635

Epoch 00029: LearningRateScheduler reducing learning rate to 0.0001054043195528446.
Epoch 29/40
95/96 [============================>.] - ETA: 0s - loss: 0.0110 - categorical_accuracy: 0.9980
Epoch 00029: val_categorical_accuracy did not improve from 0.96350
96/96 [==============================] - 48s 496ms/step - loss: 0.0109 - categorical_accuracy: 0.9980 - val_loss: 0.1499 - val_categorical_accuracy: 0.9635

Epoch 00030: LearningRateScheduler reducing learning rate to 0.00010432345564227568.
Epoch 30/40
95/96 [============================>.] - ETA: 0s - loss: 0.0161 - categorical_accuracy: 0.9941
Epoch 00030: val_categorical_accuracy did not improve from 0.96350
96/96 [==============================] - 48s 496ms/step - loss: 0.0160 - categorical_accuracy: 0.9941 - val_loss: 0.1841 - val_categorical_accuracy: 0.9526

Epoch 00031: LearningRateScheduler reducing learning rate to 0.00010345876451382055.
Epoch 31/40
95/96 [============================>.] - ETA: 0s - loss: 0.0305 - categorical_accuracy: 0.9908
Epoch 00031: val_categorical_accuracy did not improve from 0.96350
96/96 [==============================] - 48s 496ms/step - loss: 0.0302 - categorical_accuracy: 0.9909 - val_loss: 0.2942 - val_categorical_accuracy: 0.9307

Epoch 00032: LearningRateScheduler reducing learning rate to 0.00010276701161105644.
Epoch 32/40
95/96 [============================>.] - ETA: 0s - loss: 0.0139 - categorical_accuracy: 0.9967
Epoch 00032: val_categorical_accuracy did not improve from 0.96350
96/96 [==============================] - 48s 496ms/step - loss: 0.0144 - categorical_accuracy: 0.9961 - val_loss: 0.3465 - val_categorical_accuracy: 0.9270

Epoch 00033: LearningRateScheduler reducing learning rate to 0.00010221360928884516.
Epoch 33/40
95/96 [============================>.] - ETA: 0s - loss: 0.0203 - categorical_accuracy: 0.9928
Epoch 00033: val_categorical_accuracy did not improve from 0.96350
96/96 [==============================] - 48s 496ms/step - loss: 0.0201 - categorical_accuracy: 0.9928 - val_loss: 0.3595 - val_categorical_accuracy: 0.9343

Epoch 00034: LearningRateScheduler reducing learning rate to 0.00010177088743107613.
Epoch 34/40
95/96 [============================>.] - ETA: 0s - loss: 0.0367 - categorical_accuracy: 0.9875
Epoch 00034: val_categorical_accuracy did not improve from 0.96350
96/96 [==============================] - 48s 496ms/step - loss: 0.0372 - categorical_accuracy: 0.9870 - val_loss: 0.2697 - val_categorical_accuracy: 0.9380

Epoch 00035: LearningRateScheduler reducing learning rate to 0.0001014167099448609.
Epoch 35/40
95/96 [============================>.] - ETA: 0s - loss: 0.0445 - categorical_accuracy: 0.9855
Epoch 00035: val_categorical_accuracy did not improve from 0.96350
96/96 [==============================] - 48s 496ms/step - loss: 0.0472 - categorical_accuracy: 0.9844 - val_loss: 0.2040 - val_categorical_accuracy: 0.9416

Epoch 00036: LearningRateScheduler reducing learning rate to 0.00010113336795588872.
Epoch 36/40
95/96 [============================>.] - ETA: 0s - loss: 0.0465 - categorical_accuracy: 0.9875
Epoch 00036: val_categorical_accuracy did not improve from 0.96350
96/96 [==============================] - 48s 496ms/step - loss: 0.0460 - categorical_accuracy: 0.9876 - val_loss: 0.3963 - val_categorical_accuracy: 0.9416

Epoch 00037: LearningRateScheduler reducing learning rate to 0.00010090669436471098.
Epoch 37/40
95/96 [============================>.] - ETA: 0s - loss: 0.0402 - categorical_accuracy: 0.9868
Epoch 00037: val_categorical_accuracy did not improve from 0.96350
96/96 [==============================] - 48s 496ms/step - loss: 0.0398 - categorical_accuracy: 0.9870 - val_loss: 0.2234 - val_categorical_accuracy: 0.9489

Epoch 00038: LearningRateScheduler reducing learning rate to 0.00010072535549176879.
Epoch 38/40
95/96 [============================>.] - ETA: 0s - loss: 0.0514 - categorical_accuracy: 0.9822
Epoch 00038: val_categorical_accuracy did not improve from 0.96350
96/96 [==============================] - 48s 496ms/step - loss: 0.0511 - categorical_accuracy: 0.9824 - val_loss: 0.1873 - val_categorical_accuracy: 0.9489

Epoch 00039: LearningRateScheduler reducing learning rate to 0.00010058028439341503.
Epoch 39/40
95/96 [============================>.] - ETA: 0s - loss: 0.0272 - categorical_accuracy: 0.9934
Epoch 00039: val_categorical_accuracy did not improve from 0.96350
96/96 [==============================] - 48s 496ms/step - loss: 0.0269 - categorical_accuracy: 0.9935 - val_loss: 0.1783 - val_categorical_accuracy: 0.9599

Epoch 00040: LearningRateScheduler reducing learning rate to 0.00010046422751473202.
Epoch 40/40
95/96 [============================>.] - ETA: 0s - loss: 0.0104 - categorical_accuracy: 0.9987
Epoch 00040: val_categorical_accuracy did not improve from 0.96350
96/96 [==============================] - 48s 496ms/step - loss: 0.0103 - categorical_accuracy: 0.9987 - val_loss: 0.2151 - val_categorical_accuracy: 0.9489
record (val_loss and val_categorical_accuracy at the best checkpoint):  0.14244348507458604 0.96350366
Start inference on test dataset.
114/114 [==============================] - 18s 155ms/step
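The "improved from ... saving model" / "did not improve" messages come from a best-only checkpoint monitoring `val_categorical_accuracy` in `max` mode. A pure-Python sketch of that keep-the-best logic (the class name is hypothetical; the notebook presumably uses `keras.callbacks.ModelCheckpoint(save_best_only=True)`):

```python
import math

class BestMetricTracker:
    """Mimics ModelCheckpoint(save_best_only=True, mode='max'):
    report an improvement (and trigger a save) only when the
    monitored metric exceeds the best value seen so far."""

    def __init__(self):
        self.best = -math.inf  # matches the "improved from -inf" first message

    def update(self, value):
        if value > self.best:
            self.best = value
            return True   # would save the model weights here
        return False
```

For example, feeding the epoch 14-16 accuracies from the ResNet101V2 run (0.9562, 0.9635, 0.9745) triggers a save on each new maximum only, exactly as logged.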
Finetune InceptionResNetV2...
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
inception_resnet_v2 (Model)  (None, 11, 11, 1536)      54336736  
_________________________________________________________________
global_max_pooling2d (Global (None, 1536)              0         
_________________________________________________________________
dense (Dense)                (None, 4)                 6148      
=================================================================
Total params: 54,342,884
Trainable params: 54,282,340
Non-trainable params: 60,544
_________________________________________________________________
Train for 96 steps, validate for 18 steps

Epoch 00001: LearningRateScheduler reducing learning rate to 0.0001.
Epoch 1/40
95/96 [============================>.] - ETA: 0s - loss: 0.8153 - categorical_accuracy: 0.7664
Epoch 00001: val_categorical_accuracy improved from -inf to 0.86131, saving model to ../output/best_models/finetuned_InceptionResNetV2.h5
96/96 [==============================] - 59s 612ms/step - loss: 0.8103 - categorical_accuracy: 0.7676 - val_loss: 0.4900 - val_categorical_accuracy: 0.8613

Epoch 00002: LearningRateScheduler reducing learning rate to 0.00017500000000000003.
Epoch 2/40
95/96 [============================>.] - ETA: 0s - loss: 0.2632 - categorical_accuracy: 0.9191
Epoch 00002: val_categorical_accuracy improved from 0.86131 to 0.88686, saving model to ../output/best_models/finetuned_InceptionResNetV2.h5
96/96 [==============================] - 45s 472ms/step - loss: 0.2618 - categorical_accuracy: 0.9193 - val_loss: 0.4489 - val_categorical_accuracy: 0.8869

Epoch 00003: LearningRateScheduler reducing learning rate to 0.00025.
Epoch 3/40
95/96 [============================>.] - ETA: 0s - loss: 0.2644 - categorical_accuracy: 0.9270
Epoch 00003: val_categorical_accuracy did not improve from 0.88686
96/96 [==============================] - 43s 448ms/step - loss: 0.2665 - categorical_accuracy: 0.9258 - val_loss: 3.6418 - val_categorical_accuracy: 0.8102

Epoch 00004: LearningRateScheduler reducing learning rate to 0.00032500000000000004.
Epoch 4/40
95/96 [============================>.] - ETA: 0s - loss: 0.2366 - categorical_accuracy: 0.9309
Epoch 00004: val_categorical_accuracy did not improve from 0.88686
96/96 [==============================] - 43s 447ms/step - loss: 0.2346 - categorical_accuracy: 0.9316 - val_loss: 0.6892 - val_categorical_accuracy: 0.8467

Epoch 00005: LearningRateScheduler reducing learning rate to 0.0004.
Epoch 5/40
95/96 [============================>.] - ETA: 0s - loss: 0.2991 - categorical_accuracy: 0.9257
Epoch 00005: val_categorical_accuracy improved from 0.88686 to 0.91241, saving model to ../output/best_models/finetuned_InceptionResNetV2.h5
96/96 [==============================] - 45s 473ms/step - loss: 0.3008 - categorical_accuracy: 0.9245 - val_loss: 0.5470 - val_categorical_accuracy: 0.9124

Epoch 00006: LearningRateScheduler reducing learning rate to 0.0004.
Epoch 6/40
95/96 [============================>.] - ETA: 0s - loss: 0.2973 - categorical_accuracy: 0.9224
Epoch 00006: val_categorical_accuracy did not improve from 0.91241
96/96 [==============================] - 43s 448ms/step - loss: 0.2957 - categorical_accuracy: 0.9225 - val_loss: 23.1251 - val_categorical_accuracy: 0.7044

Epoch 00007: LearningRateScheduler reducing learning rate to 0.0004.
Epoch 7/40
95/96 [============================>.] - ETA: 0s - loss: 0.1168 - categorical_accuracy: 0.9632
Epoch 00007: val_categorical_accuracy did not improve from 0.91241
96/96 [==============================] - 43s 447ms/step - loss: 0.1157 - categorical_accuracy: 0.9635 - val_loss: 0.5850 - val_categorical_accuracy: 0.8942

Epoch 00008: LearningRateScheduler reducing learning rate to 0.0004.
Epoch 8/40
95/96 [============================>.] - ETA: 0s - loss: 0.0682 - categorical_accuracy: 0.9763
Epoch 00008: val_categorical_accuracy improved from 0.91241 to 0.95620, saving model to ../output/best_models/finetuned_InceptionResNetV2.h5
96/96 [==============================] - 45s 473ms/step - loss: 0.0675 - categorical_accuracy: 0.9766 - val_loss: 0.0980 - val_categorical_accuracy: 0.9562

Epoch 00009: LearningRateScheduler reducing learning rate to 0.0004.
Epoch 9/40
95/96 [============================>.] - ETA: 0s - loss: 0.0874 - categorical_accuracy: 0.9783
Epoch 00009: val_categorical_accuracy did not improve from 0.95620
96/96 [==============================] - 43s 448ms/step - loss: 0.0865 - categorical_accuracy: 0.9785 - val_loss: 0.1573 - val_categorical_accuracy: 0.9489

Epoch 00010: LearningRateScheduler reducing learning rate to 0.0004.
Epoch 10/40
95/96 [============================>.] - ETA: 0s - loss: 0.0325 - categorical_accuracy: 0.9901
Epoch 00010: val_categorical_accuracy did not improve from 0.95620
96/96 [==============================] - 43s 447ms/step - loss: 0.0324 - categorical_accuracy: 0.9902 - val_loss: 0.1995 - val_categorical_accuracy: 0.9526

Epoch 00011: LearningRateScheduler reducing learning rate to 0.0004.
Epoch 11/40
95/96 [============================>.] - ETA: 0s - loss: 0.0635 - categorical_accuracy: 0.9816
Epoch 00011: val_categorical_accuracy did not improve from 0.95620
96/96 [==============================] - 43s 447ms/step - loss: 0.0645 - categorical_accuracy: 0.9811 - val_loss: 2.7743 - val_categorical_accuracy: 0.0511

Epoch 00012: LearningRateScheduler reducing learning rate to 0.00034.
Epoch 12/40
95/96 [============================>.] - ETA: 0s - loss: 0.0235 - categorical_accuracy: 0.9934
Epoch 00012: val_categorical_accuracy did not improve from 0.95620
96/96 [==============================] - 43s 447ms/step - loss: 0.0233 - categorical_accuracy: 0.9935 - val_loss: 0.2366 - val_categorical_accuracy: 0.9380

Epoch 00013: LearningRateScheduler reducing learning rate to 0.00029200000000000005.
Epoch 13/40
95/96 [============================>.] - ETA: 0s - loss: 0.0292 - categorical_accuracy: 0.9947
Epoch 00013: val_categorical_accuracy improved from 0.95620 to 0.95985, saving model to ../output/best_models/finetuned_InceptionResNetV2.h5
96/96 [==============================] - 46s 476ms/step - loss: 0.0289 - categorical_accuracy: 0.9948 - val_loss: 0.1992 - val_categorical_accuracy: 0.9599

Epoch 00014: LearningRateScheduler reducing learning rate to 0.00025360000000000004.
Epoch 14/40
95/96 [============================>.] - ETA: 0s - loss: 0.0041 - categorical_accuracy: 0.9993
Epoch 00014: val_categorical_accuracy improved from 0.95985 to 0.96350, saving model to ../output/best_models/finetuned_InceptionResNetV2.h5
96/96 [==============================] - 45s 474ms/step - loss: 0.0041 - categorical_accuracy: 0.9993 - val_loss: 0.1905 - val_categorical_accuracy: 0.9635

Epoch 00015: LearningRateScheduler reducing learning rate to 0.00022288000000000006.
Epoch 15/40
95/96 [============================>.] - ETA: 0s - loss: 0.0055 - categorical_accuracy: 0.9980
Epoch 00015: val_categorical_accuracy did not improve from 0.96350
96/96 [==============================] - 43s 448ms/step - loss: 0.0055 - categorical_accuracy: 0.9980 - val_loss: 0.2028 - val_categorical_accuracy: 0.9599

Epoch 00016: LearningRateScheduler reducing learning rate to 0.00019830400000000006.
Epoch 16/40
95/96 [============================>.] - ETA: 0s - loss: 0.0069 - categorical_accuracy: 0.9987
Epoch 00016: val_categorical_accuracy did not improve from 0.96350
96/96 [==============================] - 43s 447ms/step - loss: 0.0068 - categorical_accuracy: 0.9987 - val_loss: 0.1791 - val_categorical_accuracy: 0.9635

Epoch 00017: LearningRateScheduler reducing learning rate to 0.00017864320000000004.
Epoch 17/40
95/96 [============================>.] - ETA: 0s - loss: 0.0062 - categorical_accuracy: 0.9980
Epoch 00017: val_categorical_accuracy did not improve from 0.96350
96/96 [==============================] - 43s 447ms/step - loss: 0.0062 - categorical_accuracy: 0.9980 - val_loss: 0.1625 - val_categorical_accuracy: 0.9526

Epoch 00018: LearningRateScheduler reducing learning rate to 0.00016291456000000005.
Epoch 18/40
95/96 [============================>.] - ETA: 0s - loss: 0.0038 - categorical_accuracy: 0.9993
Epoch 00018: val_categorical_accuracy did not improve from 0.96350
96/96 [==============================] - 43s 447ms/step - loss: 0.0038 - categorical_accuracy: 0.9993 - val_loss: 0.1502 - val_categorical_accuracy: 0.9599

Epoch 00019: LearningRateScheduler reducing learning rate to 0.00015033164800000003.
Epoch 19/40
95/96 [============================>.] - ETA: 0s - loss: 0.0195 - categorical_accuracy: 0.9974
Epoch 00019: val_categorical_accuracy did not improve from 0.96350
96/96 [==============================] - 43s 447ms/step - loss: 0.0193 - categorical_accuracy: 0.9974 - val_loss: 0.1783 - val_categorical_accuracy: 0.9562

Epoch 00020: LearningRateScheduler reducing learning rate to 0.00014026531840000004.
Epoch 20/40
95/96 [============================>.] - ETA: 0s - loss: 0.0129 - categorical_accuracy: 0.9987
Epoch 00020: val_categorical_accuracy improved from 0.96350 to 0.96715, saving model to ../output/best_models/finetuned_InceptionResNetV2.h5
96/96 [==============================] - 45s 473ms/step - loss: 0.0127 - categorical_accuracy: 0.9987 - val_loss: 0.1607 - val_categorical_accuracy: 0.9672

Epoch 00021: LearningRateScheduler reducing learning rate to 0.00013221225472000002.
Epoch 21/40
95/96 [============================>.] - ETA: 0s - loss: 0.0030 - categorical_accuracy: 0.9987
Epoch 00021: val_categorical_accuracy improved from 0.96715 to 0.97080, saving model to ../output/best_models/finetuned_InceptionResNetV2.h5
96/96 [==============================] - 45s 474ms/step - loss: 0.0030 - categorical_accuracy: 0.9987 - val_loss: 0.1510 - val_categorical_accuracy: 0.9708

Epoch 00022: LearningRateScheduler reducing learning rate to 0.00012576980377600002.
Epoch 22/40
95/96 [============================>.] - ETA: 0s - loss: 0.0068 - categorical_accuracy: 0.9993
Epoch 00022: val_categorical_accuracy did not improve from 0.97080
96/96 [==============================] - 43s 447ms/step - loss: 0.0067 - categorical_accuracy: 0.9993 - val_loss: 0.1598 - val_categorical_accuracy: 0.9672

Epoch 00023: LearningRateScheduler reducing learning rate to 0.00012061584302080001.
Epoch 23/40
95/96 [============================>.] - ETA: 0s - loss: 0.0014 - categorical_accuracy: 0.9993
Epoch 00023: val_categorical_accuracy did not improve from 0.97080
96/96 [==============================] - 43s 448ms/step - loss: 0.0014 - categorical_accuracy: 0.9993 - val_loss: 0.1607 - val_categorical_accuracy: 0.9672

Epoch 00024: LearningRateScheduler reducing learning rate to 0.00011649267441664002.
Epoch 24/40
95/96 [============================>.] - ETA: 0s - loss: 0.0073 - categorical_accuracy: 0.9974
Epoch 00024: val_categorical_accuracy did not improve from 0.97080
96/96 [==============================] - 43s 447ms/step - loss: 0.0072 - categorical_accuracy: 0.9974 - val_loss: 0.1381 - val_categorical_accuracy: 0.9708

Epoch 00025: LearningRateScheduler reducing learning rate to 0.00011319413953331202.
Epoch 25/40
95/96 [============================>.] - ETA: 0s - loss: 0.0061 - categorical_accuracy: 0.9987
Epoch 00025: val_categorical_accuracy did not improve from 0.97080
96/96 [==============================] - 43s 447ms/step - loss: 0.0061 - categorical_accuracy: 0.9987 - val_loss: 0.1201 - val_categorical_accuracy: 0.9708

Epoch 00026: LearningRateScheduler reducing learning rate to 0.00011055531162664962.
Epoch 26/40
95/96 [============================>.] - ETA: 0s - loss: 0.0130 - categorical_accuracy: 0.9967
Epoch 00026: val_categorical_accuracy did not improve from 0.97080
96/96 [==============================] - 43s 447ms/step - loss: 0.0129 - categorical_accuracy: 0.9967 - val_loss: 0.1275 - val_categorical_accuracy: 0.9672

Epoch 00027: LearningRateScheduler reducing learning rate to 0.0001084442493013197.
Epoch 27/40
95/96 [============================>.] - ETA: 0s - loss: 0.0043 - categorical_accuracy: 0.9993
Epoch 00027: val_categorical_accuracy improved from 0.97080 to 0.97445, saving model to ../output/best_models/finetuned_InceptionResNetV2.h5
96/96 [==============================] - 46s 475ms/step - loss: 0.0042 - categorical_accuracy: 0.9993 - val_loss: 0.1104 - val_categorical_accuracy: 0.9745

Epoch 00028: LearningRateScheduler reducing learning rate to 0.00010675539944105576.
Epoch 28/40
95/96 [============================>.] - ETA: 0s - loss: 0.0091 - categorical_accuracy: 0.9987
Epoch 00028: val_categorical_accuracy did not improve from 0.97445
96/96 [==============================] - 43s 447ms/step - loss: 0.0091 - categorical_accuracy: 0.9987 - val_loss: 0.1258 - val_categorical_accuracy: 0.9708

Epoch 00029: LearningRateScheduler reducing learning rate to 0.0001054043195528446.
Epoch 29/40
95/96 [============================>.] - ETA: 0s - loss: 0.0067 - categorical_accuracy: 0.9987
Epoch 00029: val_categorical_accuracy did not improve from 0.97445
96/96 [==============================] - 43s 447ms/step - loss: 0.0066 - categorical_accuracy: 0.9987 - val_loss: 0.1252 - val_categorical_accuracy: 0.9708

Epoch 00030: LearningRateScheduler reducing learning rate to 0.00010432345564227568.
Epoch 30/40
95/96 [============================>.] - ETA: 0s - loss: 0.0037 - categorical_accuracy: 0.9993
Epoch 00030: val_categorical_accuracy did not improve from 0.97445
96/96 [==============================] - 43s 446ms/step - loss: 0.0037 - categorical_accuracy: 0.9993 - val_loss: 0.1393 - val_categorical_accuracy: 0.9708

Epoch 00031: LearningRateScheduler reducing learning rate to 0.00010345876451382055.
Epoch 31/40
95/96 [============================>.] - ETA: 0s - loss: 0.0011 - categorical_accuracy: 1.0000
Epoch 00031: val_categorical_accuracy did not improve from 0.97445
96/96 [==============================] - 43s 448ms/step - loss: 0.0011 - categorical_accuracy: 1.0000 - val_loss: 0.1315 - val_categorical_accuracy: 0.9708

Epoch 00032: LearningRateScheduler reducing learning rate to 0.00010276701161105644.
Epoch 32/40
95/96 [============================>.] - ETA: 0s - loss: 0.0048 - categorical_accuracy: 0.9987
Epoch 00032: val_categorical_accuracy did not improve from 0.97445
96/96 [==============================] - 43s 448ms/step - loss: 0.0048 - categorical_accuracy: 0.9987 - val_loss: 0.1656 - val_categorical_accuracy: 0.9672

Epoch 00033: LearningRateScheduler reducing learning rate to 0.00010221360928884516.
Epoch 33/40
95/96 [============================>.] - ETA: 0s - loss: 0.0082 - categorical_accuracy: 0.9980
Epoch 00033: val_categorical_accuracy did not improve from 0.97445
96/96 [==============================] - 43s 448ms/step - loss: 0.0081 - categorical_accuracy: 0.9980 - val_loss: 0.1405 - val_categorical_accuracy: 0.9708

Epoch 00034: LearningRateScheduler reducing learning rate to 0.00010177088743107613.
Epoch 34/40
95/96 [============================>.] - ETA: 0s - loss: 0.0036 - categorical_accuracy: 0.9993
Epoch 00034: val_categorical_accuracy did not improve from 0.97445
96/96 [==============================] - 43s 447ms/step - loss: 0.0035 - categorical_accuracy: 0.9993 - val_loss: 0.1458 - val_categorical_accuracy: 0.9672

Epoch 00035: LearningRateScheduler reducing learning rate to 0.0001014167099448609.
Epoch 35/40
95/96 [============================>.] - ETA: 0s - loss: 0.0132 - categorical_accuracy: 0.9954
Epoch 00035: val_categorical_accuracy did not improve from 0.97445
96/96 [==============================] - 43s 448ms/step - loss: 0.0131 - categorical_accuracy: 0.9954 - val_loss: 0.2093 - val_categorical_accuracy: 0.9599

Epoch 00036: LearningRateScheduler reducing learning rate to 0.00010113336795588872.
Epoch 36/40
95/96 [============================>.] - ETA: 0s - loss: 0.0013 - categorical_accuracy: 1.0000
Epoch 00036: val_categorical_accuracy did not improve from 0.97445
96/96 [==============================] - 43s 447ms/step - loss: 0.0013 - categorical_accuracy: 1.0000 - val_loss: 0.1534 - val_categorical_accuracy: 0.9672

Epoch 00037: LearningRateScheduler reducing learning rate to 0.00010090669436471098.
Epoch 37/40
95/96 [============================>.] - ETA: 0s - loss: 0.0044 - categorical_accuracy: 0.9987
Epoch 00037: val_categorical_accuracy did not improve from 0.97445
96/96 [==============================] - 43s 447ms/step - loss: 0.0043 - categorical_accuracy: 0.9987 - val_loss: 0.1878 - val_categorical_accuracy: 0.9672

Epoch 00038: LearningRateScheduler reducing learning rate to 0.00010072535549176879.
Epoch 38/40
95/96 [============================>.] - ETA: 0s - loss: 0.0054 - categorical_accuracy: 0.9987
Epoch 00038: val_categorical_accuracy did not improve from 0.97445
96/96 [==============================] - 43s 448ms/step - loss: 0.0053 - categorical_accuracy: 0.9987 - val_loss: 0.1460 - val_categorical_accuracy: 0.9562

Epoch 00039: LearningRateScheduler reducing learning rate to 0.00010058028439341503.
Epoch 39/40
95/96 [============================>.] - ETA: 0s - loss: 0.0054 - categorical_accuracy: 0.9987
Epoch 00039: val_categorical_accuracy did not improve from 0.97445
96/96 [==============================] - 43s 448ms/step - loss: 0.0053 - categorical_accuracy: 0.9987 - val_loss: 0.1747 - val_categorical_accuracy: 0.9562

Epoch 00040: LearningRateScheduler reducing learning rate to 0.00010046422751473202.
Epoch 40/40
95/96 [============================>.] - ETA: 0s - loss: 0.0079 - categorical_accuracy: 0.9980
Epoch 00040: val_categorical_accuracy did not improve from 0.97445
96/96 [==============================] - 43s 447ms/step - loss: 0.0078 - categorical_accuracy: 0.9980 - val_loss: 0.1681 - val_categorical_accuracy: 0.9672
record:  0.11044486847822554 0.97445256
Start inference on test dataset.
114/114 [==============================] - 18s 160ms/step
Finitune MobileNetV2...
/home/lzhu68/miniconda3/envs/ml/lib/python3.6/site-packages/keras_applications/mobilenet_v2.py:294: UserWarning:

`input_shape` is undefined or non-square, or `rows` is not in [96, 128, 160, 192, 224]. Weights for input shape (224, 224) will be loaded as the default.

Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
mobilenetv2_1.00_224 (Model) (None, 13, 13, 1280)      2257984   
_________________________________________________________________
global_max_pooling2d (Global (None, 1280)              0         
_________________________________________________________________
dense (Dense)                (None, 4)                 5124      
=================================================================
Total params: 2,263,108
Trainable params: 2,228,996
Non-trainable params: 34,112
_________________________________________________________________
Train for 96 steps, validate for 18 steps

Epoch 00001: LearningRateScheduler reducing learning rate to 0.0001.
Epoch 1/40
95/96 [============================>.] - ETA: 0s - loss: 1.3133 - categorical_accuracy: 0.6132
Epoch 00001: val_categorical_accuracy improved from -inf to 0.47445, saving model to ../output/best_models/finetuned_MobileNetV2.h5
96/96 [==============================] - 22s 228ms/step - loss: 1.3056 - categorical_accuracy: 0.6139 - val_loss: 2.3769 - val_categorical_accuracy: 0.4745

Epoch 00002: LearningRateScheduler reducing learning rate to 0.00017500000000000003.
Epoch 2/40
95/96 [============================>.] - ETA: 0s - loss: 0.5492 - categorical_accuracy: 0.8211
Epoch 00002: val_categorical_accuracy improved from 0.47445 to 0.75182, saving model to ../output/best_models/finetuned_MobileNetV2.h5
96/96 [==============================] - 19s 197ms/step - loss: 0.5456 - categorical_accuracy: 0.8223 - val_loss: 1.1247 - val_categorical_accuracy: 0.7518

Epoch 00003: LearningRateScheduler reducing learning rate to 0.00025.
Epoch 3/40
95/96 [============================>.] - ETA: 0s - loss: 0.3373 - categorical_accuracy: 0.8921
Epoch 00003: val_categorical_accuracy did not improve from 0.75182
96/96 [==============================] - 19s 195ms/step - loss: 0.3341 - categorical_accuracy: 0.8932 - val_loss: 1.2988 - val_categorical_accuracy: 0.7372

Epoch 00004: LearningRateScheduler reducing learning rate to 0.00032500000000000004.
Epoch 4/40
95/96 [============================>.] - ETA: 0s - loss: 0.4234 - categorical_accuracy: 0.8967
Epoch 00004: val_categorical_accuracy did not improve from 0.75182
96/96 [==============================] - 19s 194ms/step - loss: 0.4203 - categorical_accuracy: 0.8971 - val_loss: 4.2421 - val_categorical_accuracy: 0.5474

Epoch 00005: LearningRateScheduler reducing learning rate to 0.0004.
Epoch 5/40
95/96 [============================>.] - ETA: 0s - loss: 0.5348 - categorical_accuracy: 0.8836
Epoch 00005: val_categorical_accuracy did not improve from 0.75182
96/96 [==============================] - 19s 194ms/step - loss: 0.5541 - categorical_accuracy: 0.8822 - val_loss: 2.4230 - val_categorical_accuracy: 0.7007

Epoch 00006: LearningRateScheduler reducing learning rate to 0.0004.
Epoch 6/40
95/96 [============================>.] - ETA: 0s - loss: 0.4603 - categorical_accuracy: 0.9092
Epoch 00006: val_categorical_accuracy did not improve from 0.75182
96/96 [==============================] - 19s 194ms/step - loss: 0.4585 - categorical_accuracy: 0.9095 - val_loss: 3.2317 - val_categorical_accuracy: 0.6131

Epoch 00007: LearningRateScheduler reducing learning rate to 0.0004.
Epoch 7/40
95/96 [============================>.] - ETA: 0s - loss: 0.3769 - categorical_accuracy: 0.9184
Epoch 00007: val_categorical_accuracy did not improve from 0.75182
96/96 [==============================] - 19s 194ms/step - loss: 0.3734 - categorical_accuracy: 0.9193 - val_loss: 2.8953 - val_categorical_accuracy: 0.5693

Epoch 00008: LearningRateScheduler reducing learning rate to 0.0004.
Epoch 8/40
95/96 [============================>.] - ETA: 0s - loss: 0.3459 - categorical_accuracy: 0.9217
Epoch 00008: val_categorical_accuracy did not improve from 0.75182
96/96 [==============================] - 19s 194ms/step - loss: 0.3486 - categorical_accuracy: 0.9212 - val_loss: 1.6234 - val_categorical_accuracy: 0.6788

Epoch 00009: LearningRateScheduler reducing learning rate to 0.0004.
Epoch 9/40
95/96 [============================>.] - ETA: 0s - loss: 0.4455 - categorical_accuracy: 0.9125
Epoch 00009: val_categorical_accuracy improved from 0.75182 to 0.79562, saving model to ../output/best_models/finetuned_MobileNetV2.h5
96/96 [==============================] - 19s 198ms/step - loss: 0.4410 - categorical_accuracy: 0.9134 - val_loss: 0.6377 - val_categorical_accuracy: 0.7956

Epoch 00010: LearningRateScheduler reducing learning rate to 0.0004.
Epoch 10/40
95/96 [============================>.] - ETA: 0s - loss: 0.3334 - categorical_accuracy: 0.9329
Epoch 00010: val_categorical_accuracy did not improve from 0.79562
96/96 [==============================] - 19s 194ms/step - loss: 0.3315 - categorical_accuracy: 0.9323 - val_loss: 1.5448 - val_categorical_accuracy: 0.6971

Epoch 00011: LearningRateScheduler reducing learning rate to 0.0004.
Epoch 11/40
95/96 [============================>.] - ETA: 0s - loss: 0.3215 - categorical_accuracy: 0.9336
Epoch 00011: val_categorical_accuracy did not improve from 0.79562
96/96 [==============================] - 19s 195ms/step - loss: 0.3284 - categorical_accuracy: 0.9329 - val_loss: 1.6709 - val_categorical_accuracy: 0.7372

Epoch 00012: LearningRateScheduler reducing learning rate to 0.00034.
Epoch 12/40
95/96 [============================>.] - ETA: 0s - loss: 0.2321 - categorical_accuracy: 0.9520
Epoch 00012: val_categorical_accuracy improved from 0.79562 to 0.88686, saving model to ../output/best_models/finetuned_MobileNetV2.h5
96/96 [==============================] - 19s 197ms/step - loss: 0.2298 - categorical_accuracy: 0.9525 - val_loss: 0.7031 - val_categorical_accuracy: 0.8869

Epoch 00013: LearningRateScheduler reducing learning rate to 0.00029200000000000005.
Epoch 13/40
95/96 [============================>.] - ETA: 0s - loss: 0.1852 - categorical_accuracy: 0.9632
Epoch 00013: val_categorical_accuracy did not improve from 0.88686
96/96 [==============================] - 19s 194ms/step - loss: 0.1833 - categorical_accuracy: 0.9635 - val_loss: 0.7740 - val_categorical_accuracy: 0.8467

Epoch 00014: LearningRateScheduler reducing learning rate to 0.00025360000000000004.
Epoch 14/40
95/96 [============================>.] - ETA: 0s - loss: 0.1358 - categorical_accuracy: 0.9632
Epoch 00014: val_categorical_accuracy did not improve from 0.88686
96/96 [==============================] - 19s 195ms/step - loss: 0.1344 - categorical_accuracy: 0.9635 - val_loss: 0.4925 - val_categorical_accuracy: 0.8832

Epoch 00015: LearningRateScheduler reducing learning rate to 0.00022288000000000006.
Epoch 15/40
95/96 [============================>.] - ETA: 0s - loss: 0.0657 - categorical_accuracy: 0.9855
Epoch 00015: val_categorical_accuracy improved from 0.88686 to 0.93066, saving model to ../output/best_models/finetuned_MobileNetV2.h5
96/96 [==============================] - 19s 198ms/step - loss: 0.0652 - categorical_accuracy: 0.9857 - val_loss: 0.3694 - val_categorical_accuracy: 0.9307

Epoch 00016: LearningRateScheduler reducing learning rate to 0.00019830400000000006.
Epoch 16/40
95/96 [============================>.] - ETA: 0s - loss: 0.0556 - categorical_accuracy: 0.9809
Epoch 00016: val_categorical_accuracy improved from 0.93066 to 0.94526, saving model to ../output/best_models/finetuned_MobileNetV2.h5
96/96 [==============================] - 19s 197ms/step - loss: 0.0552 - categorical_accuracy: 0.9811 - val_loss: 0.2737 - val_categorical_accuracy: 0.9453

Epoch 00017: LearningRateScheduler reducing learning rate to 0.00017864320000000004.
Epoch 17/40
95/96 [============================>.] - ETA: 0s - loss: 0.0799 - categorical_accuracy: 0.9789
Epoch 00017: val_categorical_accuracy did not improve from 0.94526
96/96 [==============================] - 19s 194ms/step - loss: 0.0791 - categorical_accuracy: 0.9792 - val_loss: 0.3695 - val_categorical_accuracy: 0.9234

Epoch 00018: LearningRateScheduler reducing learning rate to 0.00016291456000000005.
Epoch 18/40
95/96 [============================>.] - ETA: 0s - loss: 0.0375 - categorical_accuracy: 0.9882
Epoch 00018: val_categorical_accuracy improved from 0.94526 to 0.95255, saving model to ../output/best_models/finetuned_MobileNetV2.h5
96/96 [==============================] - 19s 197ms/step - loss: 0.0383 - categorical_accuracy: 0.9883 - val_loss: 0.2206 - val_categorical_accuracy: 0.9526

Epoch 00019: LearningRateScheduler reducing learning rate to 0.00015033164800000003.
Epoch 19/40
95/96 [============================>.] - ETA: 0s - loss: 0.0137 - categorical_accuracy: 0.9967
Epoch 00019: val_categorical_accuracy did not improve from 0.95255
96/96 [==============================] - 19s 194ms/step - loss: 0.0135 - categorical_accuracy: 0.9967 - val_loss: 0.2425 - val_categorical_accuracy: 0.9526

Epoch 00020: LearningRateScheduler reducing learning rate to 0.00014026531840000004.
Epoch 20/40
95/96 [============================>.] - ETA: 0s - loss: 0.0181 - categorical_accuracy: 0.9934
Epoch 00020: val_categorical_accuracy did not improve from 0.95255
96/96 [==============================] - 19s 194ms/step - loss: 0.0179 - categorical_accuracy: 0.9935 - val_loss: 0.2428 - val_categorical_accuracy: 0.9380

Epoch 00021: LearningRateScheduler reducing learning rate to 0.00013221225472000002.
Epoch 21/40
95/96 [============================>.] - ETA: 0s - loss: 0.0387 - categorical_accuracy: 0.9868
Epoch 00021: val_categorical_accuracy did not improve from 0.95255
96/96 [==============================] - 19s 194ms/step - loss: 0.0384 - categorical_accuracy: 0.9870 - val_loss: 0.2731 - val_categorical_accuracy: 0.9343

Epoch 00022: LearningRateScheduler reducing learning rate to 0.00012576980377600002.
Epoch 22/40
95/96 [============================>.] - ETA: 0s - loss: 0.0201 - categorical_accuracy: 0.9947
Epoch 00022: val_categorical_accuracy did not improve from 0.95255
96/96 [==============================] - 19s 194ms/step - loss: 0.0199 - categorical_accuracy: 0.9948 - val_loss: 0.2800 - val_categorical_accuracy: 0.9161

Epoch 00023: LearningRateScheduler reducing learning rate to 0.00012061584302080001.
Epoch 23/40
95/96 [============================>.] - ETA: 0s - loss: 0.0216 - categorical_accuracy: 0.9928
Epoch 00023: val_categorical_accuracy did not improve from 0.95255
96/96 [==============================] - 19s 194ms/step - loss: 0.0214 - categorical_accuracy: 0.9928 - val_loss: 0.2039 - val_categorical_accuracy: 0.9380

Epoch 00024: LearningRateScheduler reducing learning rate to 0.00011649267441664002.
Epoch 24/40
95/96 [============================>.] - ETA: 0s - loss: 0.0163 - categorical_accuracy: 0.9947
Epoch 00024: val_categorical_accuracy did not improve from 0.95255
96/96 [==============================] - 19s 194ms/step - loss: 0.0161 - categorical_accuracy: 0.9948 - val_loss: 0.2204 - val_categorical_accuracy: 0.9489

Epoch 00025: LearningRateScheduler reducing learning rate to 0.00011319413953331202.
Epoch 25/40
95/96 [============================>.] - ETA: 0s - loss: 0.0233 - categorical_accuracy: 0.9947
Epoch 00025: val_categorical_accuracy did not improve from 0.95255
96/96 [==============================] - 19s 195ms/step - loss: 0.0231 - categorical_accuracy: 0.9948 - val_loss: 0.2215 - val_categorical_accuracy: 0.9526

Epoch 00026: LearningRateScheduler reducing learning rate to 0.00011055531162664962.
Epoch 26/40
95/96 [============================>.] - ETA: 0s - loss: 0.0169 - categorical_accuracy: 0.9961
Epoch 00026: val_categorical_accuracy did not improve from 0.95255
96/96 [==============================] - 19s 194ms/step - loss: 0.0167 - categorical_accuracy: 0.9961 - val_loss: 0.2491 - val_categorical_accuracy: 0.9453

Epoch 00027: LearningRateScheduler reducing learning rate to 0.0001084442493013197.
Epoch 27/40
95/96 [============================>.] - ETA: 0s - loss: 0.0181 - categorical_accuracy: 0.9954
Epoch 00027: val_categorical_accuracy did not improve from 0.95255
96/96 [==============================] - 19s 194ms/step - loss: 0.0179 - categorical_accuracy: 0.9954 - val_loss: 0.2600 - val_categorical_accuracy: 0.9453

Epoch 00028: LearningRateScheduler reducing learning rate to 0.00010675539944105576.
Epoch 28/40
95/96 [============================>.] - ETA: 0s - loss: 0.0199 - categorical_accuracy: 0.9934
Epoch 00028: val_categorical_accuracy did not improve from 0.95255
96/96 [==============================] - 19s 195ms/step - loss: 0.0197 - categorical_accuracy: 0.9935 - val_loss: 0.2440 - val_categorical_accuracy: 0.9489

Epoch 00029: LearningRateScheduler reducing learning rate to 0.0001054043195528446.
Epoch 29/40
95/96 [============================>.] - ETA: 0s - loss: 0.0125 - categorical_accuracy: 0.9974
Epoch 00029: val_categorical_accuracy did not improve from 0.95255
96/96 [==============================] - 19s 194ms/step - loss: 0.0124 - categorical_accuracy: 0.9974 - val_loss: 0.2596 - val_categorical_accuracy: 0.9416

Epoch 00030: LearningRateScheduler reducing learning rate to 0.00010432345564227568.
Epoch 30/40
95/96 [============================>.] - ETA: 0s - loss: 0.0174 - categorical_accuracy: 0.9961
Epoch 00030: val_categorical_accuracy did not improve from 0.95255
96/96 [==============================] - 19s 195ms/step - loss: 0.0172 - categorical_accuracy: 0.9961 - val_loss: 0.2519 - val_categorical_accuracy: 0.9489

Epoch 00031: LearningRateScheduler reducing learning rate to 0.00010345876451382055.
Epoch 31/40
95/96 [============================>.] - ETA: 0s - loss: 0.0085 - categorical_accuracy: 0.9961
Epoch 00031: val_categorical_accuracy did not improve from 0.95255
96/96 [==============================] - 19s 195ms/step - loss: 0.0084 - categorical_accuracy: 0.9961 - val_loss: 0.2798 - val_categorical_accuracy: 0.9416

Epoch 00032: LearningRateScheduler reducing learning rate to 0.00010276701161105644.
Epoch 32/40
95/96 [============================>.] - ETA: 0s - loss: 0.0286 - categorical_accuracy: 0.9934
Epoch 00032: val_categorical_accuracy did not improve from 0.95255
96/96 [==============================] - 19s 194ms/step - loss: 0.0283 - categorical_accuracy: 0.9935 - val_loss: 0.2613 - val_categorical_accuracy: 0.9307

Epoch 00033: LearningRateScheduler reducing learning rate to 0.00010221360928884516.
Epoch 33/40
95/96 [============================>.] - ETA: 0s - loss: 0.0167 - categorical_accuracy: 0.9947
Epoch 00033: val_categorical_accuracy did not improve from 0.95255
96/96 [==============================] - 19s 194ms/step - loss: 0.0165 - categorical_accuracy: 0.9948 - val_loss: 0.3057 - val_categorical_accuracy: 0.9270

Epoch 00034: LearningRateScheduler reducing learning rate to 0.00010177088743107613.
Epoch 34/40
95/96 [============================>.] - ETA: 0s - loss: 0.0174 - categorical_accuracy: 0.9967
Epoch 00034: val_categorical_accuracy did not improve from 0.95255
96/96 [==============================] - 19s 195ms/step - loss: 0.0172 - categorical_accuracy: 0.9967 - val_loss: 0.3561 - val_categorical_accuracy: 0.9380

Epoch 00035: LearningRateScheduler reducing learning rate to 0.0001014167099448609.
Epoch 35/40
95/96 [============================>.] - ETA: 0s - loss: 0.0239 - categorical_accuracy: 0.9934
Epoch 00035: val_categorical_accuracy did not improve from 0.95255
96/96 [==============================] - 19s 194ms/step - loss: 0.0236 - categorical_accuracy: 0.9935 - val_loss: 0.3174 - val_categorical_accuracy: 0.9380

Epoch 00036: LearningRateScheduler reducing learning rate to 0.00010113336795588872.
Epoch 36/40
95/96 [============================>.] - ETA: 0s - loss: 0.0138 - categorical_accuracy: 0.9961
Epoch 00036: val_categorical_accuracy did not improve from 0.95255
96/96 [==============================] - 19s 195ms/step - loss: 0.0137 - categorical_accuracy: 0.9961 - val_loss: 0.2959 - val_categorical_accuracy: 0.9526

Epoch 00037: LearningRateScheduler reducing learning rate to 0.00010090669436471098.
Epoch 37/40
95/96 [============================>.] - ETA: 0s - loss: 0.0096 - categorical_accuracy: 0.9980
Epoch 00037: val_categorical_accuracy did not improve from 0.95255
96/96 [==============================] - 19s 194ms/step - loss: 0.0095 - categorical_accuracy: 0.9980 - val_loss: 0.2943 - val_categorical_accuracy: 0.9526

Epoch 00038: LearningRateScheduler reducing learning rate to 0.00010072535549176879.
Epoch 38/40
95/96 [============================>.] - ETA: 0s - loss: 0.0084 - categorical_accuracy: 0.9974
Epoch 00038: val_categorical_accuracy improved from 0.95255 to 0.95620, saving model to ../output/best_models/finetuned_MobileNetV2.h5
96/96 [==============================] - 19s 197ms/step - loss: 0.0084 - categorical_accuracy: 0.9974 - val_loss: 0.2335 - val_categorical_accuracy: 0.9562

Epoch 00039: LearningRateScheduler reducing learning rate to 0.00010058028439341503.
Epoch 39/40
95/96 [============================>.] - ETA: 0s - loss: 0.0131 - categorical_accuracy: 0.9961
Epoch 00039: val_categorical_accuracy did not improve from 0.95620
96/96 [==============================] - 19s 194ms/step - loss: 0.0130 - categorical_accuracy: 0.9961 - val_loss: 0.3050 - val_categorical_accuracy: 0.9416

Epoch 00040: LearningRateScheduler reducing learning rate to 0.00010046422751473202.
Epoch 40/40
95/96 [============================>.] - ETA: 0s - loss: 0.0093 - categorical_accuracy: 0.9974
Epoch 00040: val_categorical_accuracy did not improve from 0.95620
96/96 [==============================] - 19s 194ms/step - loss: 0.0092 - categorical_accuracy: 0.9974 - val_loss: 0.2917 - val_categorical_accuracy: 0.9416
record:  0.2335375535638175 0.95620435
Start inference on test dataset.
114/114 [==============================] - 15s 129ms/step
In [19]:
report_df =  pd.DataFrame(record_ls)

with pd.option_context('display.max_rows', None, 'display.max_columns', None): 
    display(report_df)
    
report_df.to_csv(os.path.join('../output', f'fintune_cnn_report.csv'), index=False)
  model                        train_loss  valid_loss  train_acc  valid_acc
0 finetuned_ResNet101V2          0.028283    0.119871   0.992839   0.974453
1 finetuned_VGG16                0.014196    0.142443   0.995443   0.963504
2 finetuned_InceptionResNetV2    0.004220    0.110445   0.999349   0.974453
3 finetuned_MobileNetV2          0.008391    0.233538   0.997396   0.956204

End-to-end finetuning of CNNs yields compelling results. Can we do better by preprocessing the input images?
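As an aside, the learning-rate values printed by `LearningRateScheduler` in the logs above follow a warmup-hold-decay pattern: linear warmup from 1e-4 to 4e-4 over the first 5 epochs, a 6-epoch hold, then exponential decay back toward 1e-4. The sketch below is my reconstruction of that schedule from the printed values; the function name and parameters are assumptions, not the notebook's actual code.

```python
def lr_schedule(epoch, lr_start=1e-4, lr_max=4e-4, lr_min=1e-4,
                warmup_epochs=5, hold_epochs=6, decay=0.8):
    """Warmup-hold-decay schedule matching the logged LR values (0-indexed epoch)."""
    if epoch < warmup_epochs:
        # Linear warmup: 1e-4, 1.75e-4, 2.5e-4, 3.25e-4, 4e-4
        return lr_start + (lr_max - lr_start) * epoch / (warmup_epochs - 1)
    if epoch < warmup_epochs + hold_epochs:
        # Hold at the peak learning rate
        return lr_max
    # Exponential decay toward lr_min: 3.4e-4, 2.92e-4, 2.536e-4, ...
    return lr_min + (lr_max - lr_min) * decay ** (epoch - warmup_epochs - hold_epochs + 1)
```

This reproduces, e.g., the logged drop to 0.00034 at epoch 12 and 0.000292 at epoch 13 (`lr_schedule(11)` and `lr_schedule(12)` with 0-indexed epochs, as Keras passes them to the callback).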

Retry finetuning CNNs after background removal

The background may distract disease detection and classification, so let's see whether background removal helps. Following victorlouisdg's kernel, I use the GrabCut algorithm for this job.

In [37]:
from mpl_toolkits.axes_grid1 import ImageGrid
In [25]:
bg_removed_img_dir = '../input/images_no_bg'
os.makedirs(bg_removed_img_dir, exist_ok=True)

preprocessing images using GrabCut

In [27]:
def init_grabcut_mask(h, w):
    # assume the leaf is roughly centered: border -> probable background,
    # central half -> probable foreground, central fifth -> definite foreground
    mask = np.ones((h, w), np.uint8) * cv2.GC_PR_BGD
    mask[h//4:3*h//4, w//4:3*w//4] = cv2.GC_PR_FGD
    mask[2*h//5:3*h//5, 2*w//5:3*w//5] = cv2.GC_FGD
    return mask

plt.imshow(init_grabcut_mask(3*136, 3*205))
Out[27]:
<matplotlib.image.AxesImage at 0x7fa3ec1174a8>
In [32]:
def remove_background(image):
    h, w = image.shape[:2]
    mask = init_grabcut_mask(h, w)
    # scratch arrays (1 x 65, float64) for GrabCut's internal bg/fg models
    bgm = np.zeros((1, 65), np.float64)
    fgm = np.zeros((1, 65), np.float64)
    # run one GrabCut iteration, initialized from the mask (not a rectangle)
    cv2.grabCut(image, mask, None, bgm, fgm, 1, cv2.GC_INIT_WITH_MASK)
    # collapse labels: GC_BGD (0) and GC_PR_BGD (2) -> 0, foreground labels -> 1
    mask_binary = np.where((mask == 2) | (mask == 0), 0, 1).astype('uint8')
    result = cv2.bitwise_and(image, image, mask=mask_binary)
#     add_contours(result, mask_binary) # optional, adds visualizations
    return result
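As a sanity check on the `mask_binary` step: GrabCut assigns every pixel one of four labels (in OpenCV, `GC_BGD == 0`, `GC_FGD == 1`, `GC_PR_BGD == 2`, `GC_PR_FGD == 3`), and the `np.where` collapses the two background labels to 0 and the two foreground labels to 1. A minimal sketch with plain NumPy, the constant values hard-coded so it runs without OpenCV:

```python
import numpy as np

# OpenCV GrabCut label values, hard-coded here so the sketch runs without cv2
GC_BGD, GC_FGD, GC_PR_BGD, GC_PR_FGD = 0, 1, 2, 3

mask = np.array([[GC_BGD, GC_PR_BGD],
                 [GC_FGD, GC_PR_FGD]], dtype=np.uint8)

# same collapse as in remove_background: background labels -> 0, foreground -> 1
mask_binary = np.where((mask == GC_PR_BGD) | (mask == GC_BGD), 0, 1).astype('uint8')
print(mask_binary)  # [[0 0]
                    #  [1 1]]
```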
In [42]:
# visualize samples

num_show = 5

rows, cols = (num_show, 2)
axes_pad = 0.2
fig_h = 4.0 * rows + axes_pad * (rows-1) 
fig_w = 4.0 * cols + axes_pad * (cols-1) 
fig = plt.figure(figsize=(fig_w, fig_h))
grid = ImageGrid(fig, 111, nrows_ncols=(rows, cols), axes_pad=0.2)   

for i, ax in enumerate(grid):
    img_path = trainval_paths[i//2]
    img = cv2.resize(cv2.imread(img_path), (IMAGE_SIZE, IMAGE_SIZE))
    if i % 2 == 1:
        img = remove_background(img)
    ax.imshow(img[:, :, ::-1])    
In [43]:
for img_path in tqdm(trainval_paths):
    img = cv2.resize(cv2.imread(img_path), (IMAGE_SIZE, IMAGE_SIZE))
    nobg = remove_background(img)
    cv2.imwrite(os.path.join(bg_removed_img_dir,
                             os.path.basename(img_path)),
               nobg)
100%|██████████| 1821/1821 [10:16<00:00,  2.95it/s]
In [44]:
for img_path in tqdm(test_paths):
    img = cv2.resize(cv2.imread(img_path), (IMAGE_SIZE, IMAGE_SIZE))
    nobg = remove_background(img)
    cv2.imwrite(os.path.join(bg_removed_img_dir,
                             os.path.basename(img_path)),
               nobg)
100%|██████████| 1821/1821 [12:54<00:00,  2.35it/s]

prepare preprocessed dataset

In [46]:
def format_path_nobg(st):
    return os.path.join(bg_removed_img_dir, st + '.jpg')

test_paths_new = test_data.image_id.apply(format_path_nobg).values
trainval_paths_new = train_data.image_id.apply(format_path_nobg).values

trainval_labels_new = np.float32(train_data.loc[:, 'healthy':'scab'].values)

train_paths_new, valid_paths_new, train_labels_new, valid_labels_new =\
train_test_split(trainval_paths_new,
                 trainval_labels_new,
                 test_size=VALIDATION_SIZE,
                 random_state=SEED)

print('train samples: ', len(train_paths_new))
print('valid samples: ', len(valid_paths_new))
print('test samples: ', len(test_paths_new))
print('path example: ', train_paths_new[0])
print('label example: ',  train_labels_new[0])
train samples:  1547
valid samples:  274
test samples:  1821
path example:  ../input/images_no_bg/Train_96.jpg
label example:  [0. 0. 1. 0.]
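The printed sizes are consistent with a held-out fraction of 0.15 (the assumed value of `VALIDATION_SIZE`, defined earlier in the notebook): when `test_size` is a fraction, `train_test_split` rounds the validation count up to a whole number of samples. A quick arithmetic check:

```python
import math

n_total = 1821          # number of train/validation images
validation_size = 0.15  # assumed value of VALIDATION_SIZE from earlier in the notebook

n_valid = math.ceil(n_total * validation_size)  # scikit-learn ceils the test fraction
n_train = n_total - n_valid
print(n_train, n_valid)  # 1547 274
```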
In [47]:
train_dataset_new = (
tf.data.Dataset
    .from_tensor_slices((train_paths_new, train_labels_new))
    .map(decode_image, num_parallel_calls=AUTO)
    .cache()
    .map(data_augment, num_parallel_calls=AUTO)
    .repeat()
    .shuffle(512)
    .batch(BATCH_SIZE)
    .prefetch(AUTO)
)

valid_dataset_new = (
    tf.data.Dataset
    .from_tensor_slices((valid_paths_new, valid_labels_new))
    .map(decode_image, num_parallel_calls=AUTO)
    .batch(BATCH_SIZE)
    .cache()
    .prefetch(AUTO)
)

test_dataset_new = (
    tf.data.Dataset
    .from_tensor_slices(test_paths_new)
    .map(decode_image, num_parallel_calls=AUTO)
    .map(data_augment, num_parallel_calls=AUTO)  # note: augmentation is also applied at test time
    .batch(BATCH_SIZE)
)

run training
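The cell below reuses the `lr_callback` defined earlier in the notebook. From the `LearningRateScheduler` messages in the logs (warm-up from 1e-4 to 4e-4, a hold, then decay by 0.8 per epoch toward a 1e-4 floor), the schedule can be reconstructed as the usual warm-up/hold/exponential-decay function. The constants below are inferred from the logged learning rates, not copied from the original definition:

```python
# Reconstruction of the LR schedule implied by the logs; constants are inferred.
LR_START = 1e-4
LR_MAX = 4e-4
LR_MIN = 1e-4
LR_RAMPUP_EPOCHS = 4
LR_SUSTAIN_EPOCHS = 6
LR_EXP_DECAY = 0.8

def lrfn(epoch):
    if epoch < LR_RAMPUP_EPOCHS:
        # linear warm-up from LR_START to LR_MAX
        return (LR_MAX - LR_START) / LR_RAMPUP_EPOCHS * epoch + LR_START
    if epoch < LR_RAMPUP_EPOCHS + LR_SUSTAIN_EPOCHS:
        # hold at the peak
        return LR_MAX
    # exponential decay toward the LR_MIN floor
    return (LR_MAX - LR_MIN) * LR_EXP_DECAY ** (epoch - LR_RAMPUP_EPOCHS - LR_SUSTAIN_EPOCHS) + LR_MIN

# epoch indices are 0-based; e.g. lrfn(11) reproduces the logged 0.00034 of Epoch 12
print(lrfn(0), lrfn(4), lrfn(11))
```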

In [48]:
record_ls = []

for cnn in ['ResNet101V2', 'VGG16', 'InceptionResNetV2', 'MobileNetV2']:
    print(f'Fine-tune {cnn}...')
    record = OrderedDict()
    
    with strategy.scope():
        # build model
        backbone = get_backbone(cnn)
        model =  tf.keras.Sequential([
            backbone,
            L.GlobalMaxPooling2D(),
#             L.Dropout(0.3),
            L.Dense(4, activation='softmax')
            ])
        model.compile(
            optimizer = 'adam',
            loss = 'categorical_crossentropy',
            metrics=['categorical_accuracy']
            )
        model.summary()
        
        
        ckpt_path = os.path.join(ckpt_dir,
                                 f'finetuned_{cnn}_nobg.h5')
        checkpoint = tf.keras.callbacks.ModelCheckpoint(
            ckpt_path,
            verbose=1,
            monitor='val_categorical_accuracy',
            save_best_only=True,
            mode='auto') 

        STEPS_PER_EPOCH = train_labels_new.shape[0] // BATCH_SIZE
        history = model.fit(
            train_dataset_new, 
            epochs=EPOCHS, 
            callbacks=[lr_callback, checkpoint],
            steps_per_epoch=STEPS_PER_EPOCH,
            validation_data=valid_dataset_new
            )
        
        # display training curves
        display_training_curves(
            history.history['loss'], 
            history.history['val_loss'], 
            'loss', 211)
        display_training_curves(
            history.history['categorical_accuracy'], 
            history.history['val_categorical_accuracy'], 
            'accuracy', 212)
        plt.show()
        
        record['model'] = f'finetuned_{cnn}_nobg'
        best_idx = np.argmax(history.history['val_categorical_accuracy'])
        record['train_loss'] = history.history['loss'][best_idx]
        record['valid_loss'] = history.history['val_loss'][best_idx]
        record['train_acc'] = history.history['categorical_accuracy'][best_idx]
        record['valid_acc'] = history.history['val_categorical_accuracy'][best_idx]
        record_ls.append(record)
        
        # run testing with best model weights
        model.load_weights(ckpt_path)
        
        print('record: ', record['valid_loss'], record['valid_acc'])
#         val_loss, val_acc = model.evaluate(valid_dataset)
#         print('confirmation: ', val_loss, val_acc)
        
        print('Start inference on test dataset.')
        probs = model.predict(test_dataset_new, verbose=1)
        sub.loc[:, 'healthy':] = probs
        sub.to_csv(os.path.join(submission_dir,
                                f'finetune_{cnn}_nobg.csv'),
                   index=False)
#         sub.head()
    
    # release memory
    # https://forums.fast.ai/t/how-could-i-release-gpu-memory-of-keras/2023/19
    del model
    K.clear_session()
    gc.collect()
Fine-tune ResNet101V2...
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
resnet101v2 (Model)          (None, 13, 13, 2048)      42626560  
_________________________________________________________________
global_max_pooling2d (Global (None, 2048)              0         
_________________________________________________________________
dense (Dense)                (None, 4)                 8196      
=================================================================
Total params: 42,634,756
Trainable params: 42,537,092
Non-trainable params: 97,664
_________________________________________________________________
Train for 96 steps, validate for 18 steps

Epoch 00001: LearningRateScheduler reducing learning rate to 0.0001.
Epoch 1/40
95/96 [============================>.] - ETA: 0s - loss: 2.8309 - categorical_accuracy: 0.6836
Epoch 00001: val_categorical_accuracy improved from -inf to 0.77737, saving model to ../output/best_models/finetuned_ResNet101V2_nobg.h5
96/96 [==============================] - 44s 462ms/step - loss: 2.8125 - categorical_accuracy: 0.6849 - val_loss: 2.2658 - val_categorical_accuracy: 0.7774

Epoch 00002: LearningRateScheduler reducing learning rate to 0.00017500000000000003.
Epoch 2/40
95/96 [============================>.] - ETA: 0s - loss: 2.1160 - categorical_accuracy: 0.7783
Epoch 00002: val_categorical_accuracy did not improve from 0.77737
96/96 [==============================] - 36s 377ms/step - loss: 2.1126 - categorical_accuracy: 0.7786 - val_loss: 5.6897 - val_categorical_accuracy: 0.4745

Epoch 00003: LearningRateScheduler reducing learning rate to 0.00025.
Epoch 3/40
95/96 [============================>.] - ETA: 0s - loss: 1.5308 - categorical_accuracy: 0.8158
Epoch 00003: val_categorical_accuracy improved from 0.77737 to 0.81387, saving model to ../output/best_models/finetuned_ResNet101V2_nobg.h5
96/96 [==============================] - 38s 399ms/step - loss: 1.5285 - categorical_accuracy: 0.8151 - val_loss: 1.8293 - val_categorical_accuracy: 0.8139

Epoch 00004: LearningRateScheduler reducing learning rate to 0.00032500000000000004.
Epoch 4/40
95/96 [============================>.] - ETA: 0s - loss: 0.8690 - categorical_accuracy: 0.8375
Epoch 00004: val_categorical_accuracy did not improve from 0.81387
96/96 [==============================] - 37s 381ms/step - loss: 0.8638 - categorical_accuracy: 0.8379 - val_loss: 1.1886 - val_categorical_accuracy: 0.8066

Epoch 00005: LearningRateScheduler reducing learning rate to 0.0004.
Epoch 5/40
95/96 [============================>.] - ETA: 0s - loss: 0.4055 - categorical_accuracy: 0.8954
Epoch 00005: val_categorical_accuracy improved from 0.81387 to 0.90876, saving model to ../output/best_models/finetuned_ResNet101V2_nobg.h5
96/96 [==============================] - 38s 400ms/step - loss: 0.4013 - categorical_accuracy: 0.8965 - val_loss: 0.4392 - val_categorical_accuracy: 0.9088

Epoch 00006: LearningRateScheduler reducing learning rate to 0.0004.
Epoch 6/40
95/96 [============================>.] - ETA: 0s - loss: 0.2653 - categorical_accuracy: 0.9283
Epoch 00006: val_categorical_accuracy improved from 0.90876 to 0.91606, saving model to ../output/best_models/finetuned_ResNet101V2_nobg.h5
96/96 [==============================] - 39s 402ms/step - loss: 0.2649 - categorical_accuracy: 0.9277 - val_loss: 0.3025 - val_categorical_accuracy: 0.9161

Epoch 00007: LearningRateScheduler reducing learning rate to 0.0004.
Epoch 7/40
95/96 [============================>.] - ETA: 0s - loss: 0.2644 - categorical_accuracy: 0.9276
Epoch 00007: val_categorical_accuracy did not improve from 0.91606
96/96 [==============================] - 37s 383ms/step - loss: 0.2638 - categorical_accuracy: 0.9277 - val_loss: 0.2572 - val_categorical_accuracy: 0.9124

Epoch 00008: LearningRateScheduler reducing learning rate to 0.0004.
Epoch 8/40
95/96 [============================>.] - ETA: 0s - loss: 0.1833 - categorical_accuracy: 0.9447
Epoch 00008: val_categorical_accuracy improved from 0.91606 to 0.93796, saving model to ../output/best_models/finetuned_ResNet101V2_nobg.h5
96/96 [==============================] - 39s 402ms/step - loss: 0.1820 - categorical_accuracy: 0.9447 - val_loss: 0.2986 - val_categorical_accuracy: 0.9380

Epoch 00009: LearningRateScheduler reducing learning rate to 0.0004.
Epoch 9/40
95/96 [============================>.] - ETA: 0s - loss: 0.1988 - categorical_accuracy: 0.9434
Epoch 00009: val_categorical_accuracy did not improve from 0.93796
96/96 [==============================] - 37s 383ms/step - loss: 0.1998 - categorical_accuracy: 0.9434 - val_loss: 0.4201 - val_categorical_accuracy: 0.8504

Epoch 00010: LearningRateScheduler reducing learning rate to 0.0004.
Epoch 10/40
95/96 [============================>.] - ETA: 0s - loss: 0.1265 - categorical_accuracy: 0.9625
Epoch 00010: val_categorical_accuracy did not improve from 0.93796
96/96 [==============================] - 37s 383ms/step - loss: 0.1256 - categorical_accuracy: 0.9629 - val_loss: 0.2653 - val_categorical_accuracy: 0.9307

Epoch 00011: LearningRateScheduler reducing learning rate to 0.0004.
Epoch 11/40
95/96 [============================>.] - ETA: 0s - loss: 0.1147 - categorical_accuracy: 0.9671
Epoch 00011: val_categorical_accuracy did not improve from 0.93796
96/96 [==============================] - 37s 384ms/step - loss: 0.1136 - categorical_accuracy: 0.9674 - val_loss: 0.2339 - val_categorical_accuracy: 0.9307

Epoch 00012: LearningRateScheduler reducing learning rate to 0.00034.
Epoch 12/40
95/96 [============================>.] - ETA: 0s - loss: 0.1244 - categorical_accuracy: 0.9579
Epoch 00012: val_categorical_accuracy did not improve from 0.93796
96/96 [==============================] - 37s 383ms/step - loss: 0.1246 - categorical_accuracy: 0.9577 - val_loss: 0.3838 - val_categorical_accuracy: 0.8832

Epoch 00013: LearningRateScheduler reducing learning rate to 0.00029200000000000005.
Epoch 13/40
95/96 [============================>.] - ETA: 0s - loss: 0.0865 - categorical_accuracy: 0.9717
Epoch 00013: val_categorical_accuracy did not improve from 0.93796
96/96 [==============================] - 37s 384ms/step - loss: 0.0861 - categorical_accuracy: 0.9714 - val_loss: 0.2667 - val_categorical_accuracy: 0.9197

Epoch 00014: LearningRateScheduler reducing learning rate to 0.00025360000000000004.
Epoch 14/40
95/96 [============================>.] - ETA: 0s - loss: 0.0579 - categorical_accuracy: 0.9836
Epoch 00014: val_categorical_accuracy did not improve from 0.93796
96/96 [==============================] - 37s 384ms/step - loss: 0.0573 - categorical_accuracy: 0.9837 - val_loss: 0.2505 - val_categorical_accuracy: 0.9380

Epoch 00015: LearningRateScheduler reducing learning rate to 0.00022288000000000006.
Epoch 15/40
95/96 [============================>.] - ETA: 0s - loss: 0.0665 - categorical_accuracy: 0.9836
Epoch 00015: val_categorical_accuracy improved from 0.93796 to 0.94161, saving model to ../output/best_models/finetuned_ResNet101V2_nobg.h5
96/96 [==============================] - 39s 402ms/step - loss: 0.0658 - categorical_accuracy: 0.9837 - val_loss: 0.2268 - val_categorical_accuracy: 0.9416

Epoch 00016: LearningRateScheduler reducing learning rate to 0.00019830400000000006.
Epoch 16/40
95/96 [============================>.] - ETA: 0s - loss: 0.0451 - categorical_accuracy: 0.9868
Epoch 00016: val_categorical_accuracy improved from 0.94161 to 0.94526, saving model to ../output/best_models/finetuned_ResNet101V2_nobg.h5
96/96 [==============================] - 39s 403ms/step - loss: 0.0446 - categorical_accuracy: 0.9870 - val_loss: 0.2184 - val_categorical_accuracy: 0.9453

Epoch 00017: LearningRateScheduler reducing learning rate to 0.00017864320000000004.
Epoch 17/40
95/96 [============================>.] - ETA: 0s - loss: 0.0220 - categorical_accuracy: 0.9961
Epoch 00017: val_categorical_accuracy did not improve from 0.94526
96/96 [==============================] - 37s 383ms/step - loss: 0.0218 - categorical_accuracy: 0.9961 - val_loss: 0.2217 - val_categorical_accuracy: 0.9453

Epoch 00018: LearningRateScheduler reducing learning rate to 0.00016291456000000005.
Epoch 18/40
95/96 [============================>.] - ETA: 0s - loss: 0.0216 - categorical_accuracy: 0.9934
Epoch 00018: val_categorical_accuracy did not improve from 0.94526
96/96 [==============================] - 37s 383ms/step - loss: 0.0216 - categorical_accuracy: 0.9935 - val_loss: 0.2548 - val_categorical_accuracy: 0.9307

Epoch 00019: LearningRateScheduler reducing learning rate to 0.00015033164800000003.
Epoch 19/40
95/96 [============================>.] - ETA: 0s - loss: 0.0090 - categorical_accuracy: 0.9987
Epoch 00019: val_categorical_accuracy did not improve from 0.94526
96/96 [==============================] - 37s 384ms/step - loss: 0.0091 - categorical_accuracy: 0.9987 - val_loss: 0.2997 - val_categorical_accuracy: 0.9416

Epoch 00020: LearningRateScheduler reducing learning rate to 0.00014026531840000004.
Epoch 20/40
95/96 [============================>.] - ETA: 0s - loss: 0.0156 - categorical_accuracy: 0.9947
Epoch 00020: val_categorical_accuracy improved from 0.94526 to 0.95255, saving model to ../output/best_models/finetuned_ResNet101V2_nobg.h5
96/96 [==============================] - 39s 403ms/step - loss: 0.0155 - categorical_accuracy: 0.9948 - val_loss: 0.2463 - val_categorical_accuracy: 0.9526

Epoch 00021: LearningRateScheduler reducing learning rate to 0.00013221225472000002.
Epoch 21/40
95/96 [============================>.] - ETA: 0s - loss: 0.0304 - categorical_accuracy: 0.9921
Epoch 00021: val_categorical_accuracy did not improve from 0.95255
96/96 [==============================] - 37s 385ms/step - loss: 0.0302 - categorical_accuracy: 0.9922 - val_loss: 0.2475 - val_categorical_accuracy: 0.9453

Epoch 00022: LearningRateScheduler reducing learning rate to 0.00012576980377600002.
Epoch 22/40
95/96 [============================>.] - ETA: 0s - loss: 0.0079 - categorical_accuracy: 0.9974
Epoch 00022: val_categorical_accuracy did not improve from 0.95255
96/96 [==============================] - 37s 384ms/step - loss: 0.0078 - categorical_accuracy: 0.9974 - val_loss: 0.2847 - val_categorical_accuracy: 0.9416

Epoch 00023: LearningRateScheduler reducing learning rate to 0.00012061584302080001.
Epoch 23/40
95/96 [============================>.] - ETA: 0s - loss: 0.0085 - categorical_accuracy: 0.9987
Epoch 00023: val_categorical_accuracy did not improve from 0.95255
96/96 [==============================] - 37s 384ms/step - loss: 0.0084 - categorical_accuracy: 0.9987 - val_loss: 0.2661 - val_categorical_accuracy: 0.9453

Epoch 00024: LearningRateScheduler reducing learning rate to 0.00011649267441664002.
Epoch 24/40
95/96 [============================>.] - ETA: 0s - loss: 0.0026 - categorical_accuracy: 1.0000
Epoch 00024: val_categorical_accuracy did not improve from 0.95255
96/96 [==============================] - 37s 384ms/step - loss: 0.0025 - categorical_accuracy: 1.0000 - val_loss: 0.2432 - val_categorical_accuracy: 0.9526

Epoch 00025: LearningRateScheduler reducing learning rate to 0.00011319413953331202.
Epoch 25/40
95/96 [============================>.] - ETA: 0s - loss: 0.0195 - categorical_accuracy: 0.9980
Epoch 00025: val_categorical_accuracy did not improve from 0.95255
96/96 [==============================] - 37s 384ms/step - loss: 0.0194 - categorical_accuracy: 0.9980 - val_loss: 0.2602 - val_categorical_accuracy: 0.9343

Epoch 00026: LearningRateScheduler reducing learning rate to 0.00011055531162664962.
Epoch 26/40
95/96 [============================>.] - ETA: 0s - loss: 0.0166 - categorical_accuracy: 0.9980
Epoch 00026: val_categorical_accuracy did not improve from 0.95255
96/96 [==============================] - 37s 384ms/step - loss: 0.0165 - categorical_accuracy: 0.9980 - val_loss: 0.2673 - val_categorical_accuracy: 0.9380

Epoch 00027: LearningRateScheduler reducing learning rate to 0.0001084442493013197.
Epoch 27/40
95/96 [============================>.] - ETA: 0s - loss: 0.0110 - categorical_accuracy: 0.9974
Epoch 00027: val_categorical_accuracy did not improve from 0.95255
96/96 [==============================] - 37s 384ms/step - loss: 0.0109 - categorical_accuracy: 0.9974 - val_loss: 0.2431 - val_categorical_accuracy: 0.9526

Epoch 00028: LearningRateScheduler reducing learning rate to 0.00010675539944105576.
Epoch 28/40
95/96 [============================>.] - ETA: 0s - loss: 0.0076 - categorical_accuracy: 0.9980
Epoch 00028: val_categorical_accuracy did not improve from 0.95255
96/96 [==============================] - 37s 384ms/step - loss: 0.0075 - categorical_accuracy: 0.9980 - val_loss: 0.2408 - val_categorical_accuracy: 0.9453

Epoch 00029: LearningRateScheduler reducing learning rate to 0.0001054043195528446.
Epoch 29/40
95/96 [============================>.] - ETA: 0s - loss: 0.0104 - categorical_accuracy: 0.9974
Epoch 00029: val_categorical_accuracy did not improve from 0.95255
96/96 [==============================] - 37s 384ms/step - loss: 0.0103 - categorical_accuracy: 0.9974 - val_loss: 0.2253 - val_categorical_accuracy: 0.9489

Epoch 00030: LearningRateScheduler reducing learning rate to 0.00010432345564227568.
Epoch 30/40
95/96 [============================>.] - ETA: 0s - loss: 0.0125 - categorical_accuracy: 0.9974
Epoch 00030: val_categorical_accuracy did not improve from 0.95255
96/96 [==============================] - 37s 384ms/step - loss: 0.0124 - categorical_accuracy: 0.9974 - val_loss: 0.3027 - val_categorical_accuracy: 0.9380

Epoch 00031: LearningRateScheduler reducing learning rate to 0.00010345876451382055.
Epoch 31/40
95/96 [============================>.] - ETA: 0s - loss: 0.0139 - categorical_accuracy: 0.9974
Epoch 00031: val_categorical_accuracy did not improve from 0.95255
96/96 [==============================] - 37s 384ms/step - loss: 0.0137 - categorical_accuracy: 0.9974 - val_loss: 0.2816 - val_categorical_accuracy: 0.9453

Epoch 00032: LearningRateScheduler reducing learning rate to 0.00010276701161105644.
Epoch 32/40
95/96 [============================>.] - ETA: 0s - loss: 0.0108 - categorical_accuracy: 0.9961
Epoch 00032: val_categorical_accuracy did not improve from 0.95255
96/96 [==============================] - 37s 383ms/step - loss: 0.0107 - categorical_accuracy: 0.9961 - val_loss: 0.2038 - val_categorical_accuracy: 0.9489

Epoch 00033: LearningRateScheduler reducing learning rate to 0.00010221360928884516.
Epoch 33/40
95/96 [============================>.] - ETA: 0s - loss: 0.0064 - categorical_accuracy: 0.9993
Epoch 00033: val_categorical_accuracy did not improve from 0.95255
96/96 [==============================] - 37s 384ms/step - loss: 0.0063 - categorical_accuracy: 0.9993 - val_loss: 0.2398 - val_categorical_accuracy: 0.9416

Epoch 00034: LearningRateScheduler reducing learning rate to 0.00010177088743107613.
Epoch 34/40
95/96 [============================>.] - ETA: 0s - loss: 0.0053 - categorical_accuracy: 0.9987
Epoch 00034: val_categorical_accuracy did not improve from 0.95255
96/96 [==============================] - 37s 384ms/step - loss: 0.0053 - categorical_accuracy: 0.9987 - val_loss: 0.2751 - val_categorical_accuracy: 0.9307

Epoch 00035: LearningRateScheduler reducing learning rate to 0.0001014167099448609.
Epoch 35/40
95/96 [============================>.] - ETA: 0s - loss: 0.0066 - categorical_accuracy: 0.9993
Epoch 00035: val_categorical_accuracy did not improve from 0.95255
96/96 [==============================] - 37s 382ms/step - loss: 0.0065 - categorical_accuracy: 0.9993 - val_loss: 0.2785 - val_categorical_accuracy: 0.9416

Epoch 00036: LearningRateScheduler reducing learning rate to 0.00010113336795588872.
Epoch 36/40
95/96 [============================>.] - ETA: 0s - loss: 0.0110 - categorical_accuracy: 0.9980
Epoch 00036: val_categorical_accuracy did not improve from 0.95255
96/96 [==============================] - 37s 384ms/step - loss: 0.0109 - categorical_accuracy: 0.9980 - val_loss: 0.2683 - val_categorical_accuracy: 0.9489

Epoch 00037: LearningRateScheduler reducing learning rate to 0.00010090669436471098.
Epoch 37/40
95/96 [============================>.] - ETA: 0s - loss: 0.0175 - categorical_accuracy: 0.9961
Epoch 00037: val_categorical_accuracy did not improve from 0.95255
96/96 [==============================] - 37s 384ms/step - loss: 0.0174 - categorical_accuracy: 0.9961 - val_loss: 0.2752 - val_categorical_accuracy: 0.9380

Epoch 00038: LearningRateScheduler reducing learning rate to 0.00010072535549176879.
Epoch 38/40
95/96 [============================>.] - ETA: 0s - loss: 0.0019 - categorical_accuracy: 1.0000
Epoch 00038: val_categorical_accuracy did not improve from 0.95255
96/96 [==============================] - 37s 384ms/step - loss: 0.0019 - categorical_accuracy: 1.0000 - val_loss: 0.2634 - val_categorical_accuracy: 0.9489

Epoch 00039: LearningRateScheduler reducing learning rate to 0.00010058028439341503.
Epoch 39/40
95/96 [============================>.] - ETA: 0s - loss: 9.0082e-04 - categorical_accuracy: 1.0000
Epoch 00039: val_categorical_accuracy did not improve from 0.95255
96/96 [==============================] - 37s 384ms/step - loss: 8.9252e-04 - categorical_accuracy: 1.0000 - val_loss: 0.2808 - val_categorical_accuracy: 0.9489

Epoch 00040: LearningRateScheduler reducing learning rate to 0.00010046422751473202.
Epoch 40/40
95/96 [============================>.] - ETA: 0s - loss: 7.6470e-04 - categorical_accuracy: 1.0000
Epoch 00040: val_categorical_accuracy did not improve from 0.95255
96/96 [==============================] - 37s 384ms/step - loss: 7.5734e-04 - categorical_accuracy: 1.0000 - val_loss: 0.2864 - val_categorical_accuracy: 0.9489
record:  0.24625521984206797 0.95255476
Start inference on test dataset.
114/114 [==============================] - 12s 109ms/step
Fine-tune VGG16...
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
vgg16 (Model)                (None, 12, 12, 512)       14714688  
_________________________________________________________________
global_max_pooling2d (Global (None, 512)               0         
_________________________________________________________________
dense (Dense)                (None, 4)                 2052      
=================================================================
Total params: 14,716,740
Trainable params: 14,716,740
Non-trainable params: 0
_________________________________________________________________
Train for 96 steps, validate for 18 steps

Epoch 00001: LearningRateScheduler reducing learning rate to 0.0001.
Epoch 1/40
95/96 [============================>.] - ETA: 0s - loss: 1.0511 - categorical_accuracy: 0.5421
Epoch 00001: val_categorical_accuracy improved from -inf to 0.83577, saving model to ../output/best_models/finetuned_VGG16_nobg.h5
96/96 [==============================] - 49s 506ms/step - loss: 1.0456 - categorical_accuracy: 0.5456 - val_loss: 0.5284 - val_categorical_accuracy: 0.8358

Epoch 00002: LearningRateScheduler reducing learning rate to 0.00017500000000000003.
Epoch 2/40
95/96 [============================>.] - ETA: 0s - loss: 0.4496 - categorical_accuracy: 0.8750
Epoch 00002: val_categorical_accuracy improved from 0.83577 to 0.94161, saving model to ../output/best_models/finetuned_VGG16_nobg.h5
96/96 [==============================] - 48s 502ms/step - loss: 0.4507 - categorical_accuracy: 0.8743 - val_loss: 0.2104 - val_categorical_accuracy: 0.9416

Epoch 00003: LearningRateScheduler reducing learning rate to 0.00025.
Epoch 3/40
95/96 [============================>.] - ETA: 0s - loss: 0.3823 - categorical_accuracy: 0.8895
Epoch 00003: val_categorical_accuracy did not improve from 0.94161
96/96 [==============================] - 48s 496ms/step - loss: 0.3806 - categorical_accuracy: 0.8900 - val_loss: 0.2461 - val_categorical_accuracy: 0.9270

Epoch 00004: LearningRateScheduler reducing learning rate to 0.00032500000000000004.
Epoch 4/40
95/96 [============================>.] - ETA: 0s - loss: 0.4253 - categorical_accuracy: 0.8730
Epoch 00004: val_categorical_accuracy did not improve from 0.94161
96/96 [==============================] - 48s 495ms/step - loss: 0.4246 - categorical_accuracy: 0.8724 - val_loss: 0.2369 - val_categorical_accuracy: 0.9270

Epoch 00005: LearningRateScheduler reducing learning rate to 0.0004.
Epoch 5/40
95/96 [============================>.] - ETA: 0s - loss: 0.3757 - categorical_accuracy: 0.8921
Epoch 00005: val_categorical_accuracy improved from 0.94161 to 0.94526, saving model to ../output/best_models/finetuned_VGG16_nobg.h5
96/96 [==============================] - 48s 500ms/step - loss: 0.3764 - categorical_accuracy: 0.8919 - val_loss: 0.2270 - val_categorical_accuracy: 0.9453

Epoch 00006: LearningRateScheduler reducing learning rate to 0.0004.
Epoch 6/40
95/96 [============================>.] - ETA: 0s - loss: 0.3153 - categorical_accuracy: 0.9079
Epoch 00006: val_categorical_accuracy did not improve from 0.94526
96/96 [==============================] - 47s 494ms/step - loss: 0.3129 - categorical_accuracy: 0.9089 - val_loss: 0.3133 - val_categorical_accuracy: 0.8832

Epoch 00007: LearningRateScheduler reducing learning rate to 0.0004.
Epoch 7/40
95/96 [============================>.] - ETA: 0s - loss: 0.3119 - categorical_accuracy: 0.9112
Epoch 00007: val_categorical_accuracy did not improve from 0.94526
96/96 [==============================] - 47s 495ms/step - loss: 0.3139 - categorical_accuracy: 0.9089 - val_loss: 0.5406 - val_categorical_accuracy: 0.7956

Epoch 00008: LearningRateScheduler reducing learning rate to 0.0004.
Epoch 8/40
95/96 [============================>.] - ETA: 0s - loss: 0.3127 - categorical_accuracy: 0.8987
Epoch 00008: val_categorical_accuracy did not improve from 0.94526
96/96 [==============================] - 47s 493ms/step - loss: 0.3116 - categorical_accuracy: 0.8984 - val_loss: 0.3669 - val_categorical_accuracy: 0.8832

Epoch 00009: LearningRateScheduler reducing learning rate to 0.0004.
Epoch 9/40
95/96 [============================>.] - ETA: 0s - loss: 0.2900 - categorical_accuracy: 0.9105
Epoch 00009: val_categorical_accuracy did not improve from 0.94526
96/96 [==============================] - 47s 493ms/step - loss: 0.2884 - categorical_accuracy: 0.9108 - val_loss: 0.2833 - val_categorical_accuracy: 0.9124

Epoch 00010: LearningRateScheduler reducing learning rate to 0.0004.
Epoch 10/40
95/96 [============================>.] - ETA: 0s - loss: 0.2939 - categorical_accuracy: 0.9079
Epoch 00010: val_categorical_accuracy did not improve from 0.94526
96/96 [==============================] - 47s 493ms/step - loss: 0.2963 - categorical_accuracy: 0.9069 - val_loss: 0.3695 - val_categorical_accuracy: 0.8759

Epoch 00011: LearningRateScheduler reducing learning rate to 0.0004.
Epoch 11/40
95/96 [============================>.] - ETA: 0s - loss: 0.2542 - categorical_accuracy: 0.9237
Epoch 00011: val_categorical_accuracy did not improve from 0.94526
96/96 [==============================] - 47s 493ms/step - loss: 0.2591 - categorical_accuracy: 0.9225 - val_loss: 0.2813 - val_categorical_accuracy: 0.8942

Epoch 00012: LearningRateScheduler reducing learning rate to 0.00034.
Epoch 12/40
95/96 [============================>.] - ETA: 0s - loss: 0.2149 - categorical_accuracy: 0.9349
Epoch 00012: val_categorical_accuracy did not improve from 0.94526
96/96 [==============================] - 47s 493ms/step - loss: 0.2167 - categorical_accuracy: 0.9342 - val_loss: 0.2848 - val_categorical_accuracy: 0.9161

Epoch 00013: LearningRateScheduler reducing learning rate to 0.00029200000000000005.
Epoch 13/40
95/96 [============================>.] - ETA: 0s - loss: 0.2108 - categorical_accuracy: 0.9316
Epoch 00013: val_categorical_accuracy did not improve from 0.94526
96/96 [==============================] - 47s 493ms/step - loss: 0.2104 - categorical_accuracy: 0.9316 - val_loss: 0.1995 - val_categorical_accuracy: 0.9453

Epoch 00014: LearningRateScheduler reducing learning rate to 0.00025360000000000004.
Epoch 14/40
95/96 [============================>.] - ETA: 0s - loss: 0.1855 - categorical_accuracy: 0.9408
Epoch 00014: val_categorical_accuracy did not improve from 0.94526
96/96 [==============================] - 47s 493ms/step - loss: 0.1840 - categorical_accuracy: 0.9414 - val_loss: 0.2088 - val_categorical_accuracy: 0.9453

Epoch 00015: LearningRateScheduler reducing learning rate to 0.00022288000000000006.
Epoch 15/40
95/96 [============================>.] - ETA: 0s - loss: 0.1871 - categorical_accuracy: 0.9368
Epoch 00015: val_categorical_accuracy did not improve from 0.94526
96/96 [==============================] - 47s 493ms/step - loss: 0.1863 - categorical_accuracy: 0.9375 - val_loss: 0.2464 - val_categorical_accuracy: 0.9416

Epoch 00016: LearningRateScheduler reducing learning rate to 0.00019830400000000006.
Epoch 16/40
95/96 [============================>.] - ETA: 0s - loss: 0.1544 - categorical_accuracy: 0.9526
Epoch 00016: val_categorical_accuracy did not improve from 0.94526
96/96 [==============================] - 47s 493ms/step - loss: 0.1532 - categorical_accuracy: 0.9531 - val_loss: 0.2000 - val_categorical_accuracy: 0.9380

Epoch 00017: LearningRateScheduler reducing learning rate to 0.00017864320000000004.
Epoch 17/40
95/96 [============================>.] - ETA: 0s - loss: 0.1595 - categorical_accuracy: 0.9461
Epoch 00017: val_categorical_accuracy did not improve from 0.94526
96/96 [==============================] - 47s 493ms/step - loss: 0.1604 - categorical_accuracy: 0.9460 - val_loss: 0.2767 - val_categorical_accuracy: 0.9197

Epoch 00018: LearningRateScheduler reducing learning rate to 0.00016291456000000005.
Epoch 18/40
95/96 [============================>.] - ETA: 0s - loss: 0.1482 - categorical_accuracy: 0.9526
Epoch 00018: val_categorical_accuracy did not improve from 0.94526
96/96 [==============================] - 47s 493ms/step - loss: 0.1486 - categorical_accuracy: 0.9525 - val_loss: 0.2521 - val_categorical_accuracy: 0.9380

Epoch 00019: LearningRateScheduler reducing learning rate to 0.00015033164800000003.
Epoch 19/40
95/96 [============================>.] - ETA: 0s - loss: 0.1094 - categorical_accuracy: 0.9632
Epoch 00019: val_categorical_accuracy improved from 0.94526 to 0.94891, saving model to ../output/best_models/finetuned_VGG16_nobg.h5
96/96 [==============================] - 48s 498ms/step - loss: 0.1097 - categorical_accuracy: 0.9629 - val_loss: 0.2152 - val_categorical_accuracy: 0.9489

Epoch 00020: LearningRateScheduler reducing learning rate to 0.00014026531840000004.
Epoch 20/40
95/96 [============================>.] - ETA: 0s - loss: 0.1105 - categorical_accuracy: 0.9651
Epoch 00020: val_categorical_accuracy did not improve from 0.94891
96/96 [==============================] - 47s 493ms/step - loss: 0.1111 - categorical_accuracy: 0.9648 - val_loss: 0.2590 - val_categorical_accuracy: 0.9307

Epoch 00021: LearningRateScheduler reducing learning rate to 0.00013221225472000002.
Epoch 21/40
95/96 [============================>.] - ETA: 0s - loss: 0.1089 - categorical_accuracy: 0.9664
Epoch 00021: val_categorical_accuracy did not improve from 0.94891
96/96 [==============================] - 47s 493ms/step - loss: 0.1103 - categorical_accuracy: 0.9661 - val_loss: 0.2880 - val_categorical_accuracy: 0.9343

Epoch 00022: LearningRateScheduler reducing learning rate to 0.00012576980377600002.
Epoch 22/40
95/96 [============================>.] - ETA: 0s - loss: 0.0875 - categorical_accuracy: 0.9684
Epoch 00022: val_categorical_accuracy did not improve from 0.94891
96/96 [==============================] - 47s 493ms/step - loss: 0.0873 - categorical_accuracy: 0.9688 - val_loss: 0.3518 - val_categorical_accuracy: 0.9307

Epoch 00023: LearningRateScheduler reducing learning rate to 0.00012061584302080001.
Epoch 23/40
95/96 [============================>.] - ETA: 0s - loss: 0.0881 - categorical_accuracy: 0.9697
Epoch 00023: val_categorical_accuracy did not improve from 0.94891
96/96 [==============================] - 47s 493ms/step - loss: 0.0876 - categorical_accuracy: 0.9701 - val_loss: 0.2937 - val_categorical_accuracy: 0.9343

Epoch 00024: LearningRateScheduler reducing learning rate to 0.00011649267441664002.
Epoch 24/40
95/96 [============================>.] - ETA: 0s - loss: 0.0766 - categorical_accuracy: 0.9743
Epoch 00024: val_categorical_accuracy did not improve from 0.94891
96/96 [==============================] - 47s 493ms/step - loss: 0.0759 - categorical_accuracy: 0.9746 - val_loss: 0.2458 - val_categorical_accuracy: 0.9416

Epoch 00025: LearningRateScheduler reducing learning rate to 0.00011319413953331202.
Epoch 25/40
95/96 [============================>.] - ETA: 0s - loss: 0.0791 - categorical_accuracy: 0.9717
Epoch 00025: val_categorical_accuracy did not improve from 0.94891
96/96 [==============================] - 47s 493ms/step - loss: 0.0783 - categorical_accuracy: 0.9720 - val_loss: 0.3227 - val_categorical_accuracy: 0.9453

Epoch 00026: LearningRateScheduler reducing learning rate to 0.00011055531162664962.
Epoch 26/40
95/96 [============================>.] - ETA: 0s - loss: 0.0563 - categorical_accuracy: 0.9829
Epoch 00026: val_categorical_accuracy did not improve from 0.94891
96/96 [==============================] - 47s 493ms/step - loss: 0.0559 - categorical_accuracy: 0.9831 - val_loss: 0.3253 - val_categorical_accuracy: 0.9343

Epoch 00027: LearningRateScheduler reducing learning rate to 0.0001084442493013197.
Epoch 27/40
95/96 [============================>.] - ETA: 0s - loss: 0.0591 - categorical_accuracy: 0.9816
Epoch 00027: val_categorical_accuracy did not improve from 0.94891
96/96 [==============================] - 47s 493ms/step - loss: 0.0587 - categorical_accuracy: 0.9818 - val_loss: 0.3884 - val_categorical_accuracy: 0.9234

Epoch 00028: LearningRateScheduler reducing learning rate to 0.00010675539944105576.
Epoch 28/40
95/96 [============================>.] - ETA: 0s - loss: 0.0591 - categorical_accuracy: 0.9836
Epoch 00028: val_categorical_accuracy did not improve from 0.94891
96/96 [==============================] - 47s 493ms/step - loss: 0.0586 - categorical_accuracy: 0.9837 - val_loss: 0.2653 - val_categorical_accuracy: 0.9380

Epoch 00029: LearningRateScheduler reducing learning rate to 0.0001054043195528446.
Epoch 29/40
95/96 [============================>.] - ETA: 0s - loss: 0.0609 - categorical_accuracy: 0.9809
Epoch 00029: val_categorical_accuracy did not improve from 0.94891
96/96 [==============================] - 47s 493ms/step - loss: 0.0610 - categorical_accuracy: 0.9811 - val_loss: 0.2942 - val_categorical_accuracy: 0.9380

Epoch 00030: LearningRateScheduler reducing learning rate to 0.00010432345564227568.
Epoch 30/40
95/96 [============================>.] - ETA: 0s - loss: 0.0483 - categorical_accuracy: 0.9862
Epoch 00030: val_categorical_accuracy did not improve from 0.94891
96/96 [==============================] - 47s 493ms/step - loss: 0.0478 - categorical_accuracy: 0.9863 - val_loss: 0.4523 - val_categorical_accuracy: 0.9416

Epoch 00031: LearningRateScheduler reducing learning rate to 0.00010345876451382055.
Epoch 31/40
95/96 [============================>.] - ETA: 0s - loss: 0.0233 - categorical_accuracy: 0.9941
Epoch 00031: val_categorical_accuracy improved from 0.94891 to 0.95255, saving model to ../output/best_models/finetuned_VGG16_nobg.h5
96/96 [==============================] - 48s 498ms/step - loss: 0.0231 - categorical_accuracy: 0.9941 - val_loss: 0.3113 - val_categorical_accuracy: 0.9526

Epoch 00032: LearningRateScheduler reducing learning rate to 0.00010276701161105644.
Epoch 32/40
95/96 [============================>.] - ETA: 0s - loss: 0.0522 - categorical_accuracy: 0.9809
Epoch 00032: val_categorical_accuracy did not improve from 0.95255
96/96 [==============================] - 47s 493ms/step - loss: 0.0527 - categorical_accuracy: 0.9805 - val_loss: 0.5279 - val_categorical_accuracy: 0.9343

Epoch 00033: LearningRateScheduler reducing learning rate to 0.00010221360928884516.
Epoch 33/40
95/96 [============================>.] - ETA: 0s - loss: 0.0941 - categorical_accuracy: 0.9678
Epoch 00033: val_categorical_accuracy did not improve from 0.95255
96/96 [==============================] - 47s 493ms/step - loss: 0.0935 - categorical_accuracy: 0.9681 - val_loss: 0.2787 - val_categorical_accuracy: 0.9124

Epoch 00034: LearningRateScheduler reducing learning rate to 0.00010177088743107613.
Epoch 34/40
95/96 [============================>.] - ETA: 0s - loss: 0.0551 - categorical_accuracy: 0.9809
Epoch 00034: val_categorical_accuracy did not improve from 0.95255
96/96 [==============================] - 47s 493ms/step - loss: 0.0549 - categorical_accuracy: 0.9811 - val_loss: 0.3111 - val_categorical_accuracy: 0.9380

Epoch 00035: LearningRateScheduler reducing learning rate to 0.0001014167099448609.
Epoch 35/40
95/96 [============================>.] - ETA: 0s - loss: 0.0475 - categorical_accuracy: 0.9836
Epoch 00035: val_categorical_accuracy did not improve from 0.95255
96/96 [==============================] - 47s 493ms/step - loss: 0.0470 - categorical_accuracy: 0.9837 - val_loss: 0.3337 - val_categorical_accuracy: 0.9453

Epoch 00036: LearningRateScheduler reducing learning rate to 0.00010113336795588872.
Epoch 36/40
95/96 [============================>.] - ETA: 0s - loss: 0.0173 - categorical_accuracy: 0.9947
Epoch 00036: val_categorical_accuracy did not improve from 0.95255
96/96 [==============================] - 47s 493ms/step - loss: 0.0172 - categorical_accuracy: 0.9948 - val_loss: 0.4181 - val_categorical_accuracy: 0.9380

Epoch 00037: LearningRateScheduler reducing learning rate to 0.00010090669436471098.
Epoch 37/40
95/96 [============================>.] - ETA: 0s - loss: 0.0235 - categorical_accuracy: 0.9921
Epoch 00037: val_categorical_accuracy improved from 0.95255 to 0.95620, saving model to ../output/best_models/finetuned_VGG16_nobg.h5
96/96 [==============================] - 48s 498ms/step - loss: 0.0233 - categorical_accuracy: 0.9922 - val_loss: 0.4309 - val_categorical_accuracy: 0.9562

Epoch 00038: LearningRateScheduler reducing learning rate to 0.00010072535549176879.
Epoch 38/40
95/96 [============================>.] - ETA: 0s - loss: 0.0637 - categorical_accuracy: 0.9770
Epoch 00038: val_categorical_accuracy did not improve from 0.95620
96/96 [==============================] - 47s 493ms/step - loss: 0.0681 - categorical_accuracy: 0.9759 - val_loss: 0.4029 - val_categorical_accuracy: 0.9307

Epoch 00039: LearningRateScheduler reducing learning rate to 0.00010058028439341503.
Epoch 39/40
95/96 [============================>.] - ETA: 0s - loss: 0.0349 - categorical_accuracy: 0.9908
Epoch 00039: val_categorical_accuracy did not improve from 0.95620
96/96 [==============================] - 47s 493ms/step - loss: 0.0346 - categorical_accuracy: 0.9909 - val_loss: 0.3716 - val_categorical_accuracy: 0.9453

Epoch 00040: LearningRateScheduler reducing learning rate to 0.00010046422751473202.
Epoch 40/40
95/96 [============================>.] - ETA: 0s - loss: 0.0237 - categorical_accuracy: 0.9934
Epoch 00040: val_categorical_accuracy did not improve from 0.95620
96/96 [==============================] - 47s 493ms/step - loss: 0.0235 - categorical_accuracy: 0.9935 - val_loss: 0.4068 - val_categorical_accuracy: 0.9453
record:  0.4309098637733971 0.95620435
Start inference on test dataset.
114/114 [==============================] - 16s 141ms/step
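The learning rates printed by `LearningRateScheduler` above follow a recognizable pattern: a linear warm-up from 1e-4 to 4e-4 over the first five epochs, a hold at 4e-4 through epoch 11, then an exponential decay (factor 0.8) toward the 1e-4 floor. The exact callback code is not shown in the log; the following is a minimal sketch, assuming that parameterization, which reproduces the printed values:

```python
# Sketch of a schedule matching the log: linear warm-up, hold, then
# exponential decay toward a floor. Constants inferred from the printed
# learning rates, not taken from the original source.
LR_START, LR_MAX, LR_MIN = 1e-4, 4e-4, 1e-4
WARMUP, HOLD, DECAY = 5, 6, 0.8

def lr_schedule(epoch):
    # Keras passes a 0-based epoch index (log's "Epoch 00001" is epoch 0).
    if epoch < WARMUP:
        return LR_START + (LR_MAX - LR_START) * epoch / (WARMUP - 1)
    if epoch < WARMUP + HOLD:
        return LR_MAX
    return LR_MIN + (LR_MAX - LR_MIN) * DECAY ** (epoch - WARMUP - HOLD + 1)
```

Passed to training as `keras.callbacks.LearningRateScheduler(lr_schedule, verbose=1)`, this yields e.g. 0.000325 at epoch 4 and 0.00034 at epoch 12, matching the messages above.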
Finetune InceptionResNetV2...
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
inception_resnet_v2 (Model)  (None, 11, 11, 1536)      54336736  
_________________________________________________________________
global_max_pooling2d (Global (None, 1536)              0         
_________________________________________________________________
dense (Dense)                (None, 4)                 6148      
=================================================================
Total params: 54,342,884
Trainable params: 54,282,340
Non-trainable params: 60,544
_________________________________________________________________
Train for 96 steps, validate for 18 steps

Epoch 00001: LearningRateScheduler reducing learning rate to 0.0001.
Epoch 1/40
95/96 [============================>.] - ETA: 0s - loss: 0.9021 - categorical_accuracy: 0.7454
Epoch 00001: val_categorical_accuracy improved from -inf to 0.81022, saving model to ../output/best_models/finetuned_InceptionResNetV2_nobg.h5
96/96 [==============================] - 57s 597ms/step - loss: 0.8955 - categorical_accuracy: 0.7467 - val_loss: 0.6350 - val_categorical_accuracy: 0.8102

Epoch 00002: LearningRateScheduler reducing learning rate to 0.00017500000000000003.
Epoch 2/40
95/96 [============================>.] - ETA: 0s - loss: 0.4069 - categorical_accuracy: 0.8757
Epoch 00002: val_categorical_accuracy improved from 0.81022 to 0.86131, saving model to ../output/best_models/finetuned_InceptionResNetV2_nobg.h5
96/96 [==============================] - 45s 470ms/step - loss: 0.4086 - categorical_accuracy: 0.8750 - val_loss: 0.4750 - val_categorical_accuracy: 0.8613

Epoch 00003: LearningRateScheduler reducing learning rate to 0.00025.
Epoch 3/40
95/96 [============================>.] - ETA: 0s - loss: 0.3733 - categorical_accuracy: 0.8908
Epoch 00003: val_categorical_accuracy did not improve from 0.86131
96/96 [==============================] - 43s 446ms/step - loss: 0.3698 - categorical_accuracy: 0.8919 - val_loss: 0.8911 - val_categorical_accuracy: 0.7847

Epoch 00004: LearningRateScheduler reducing learning rate to 0.00032500000000000004.
Epoch 4/40
95/96 [============================>.] - ETA: 0s - loss: 0.3909 - categorical_accuracy: 0.8809
Epoch 00004: val_categorical_accuracy improved from 0.86131 to 0.91241, saving model to ../output/best_models/finetuned_InceptionResNetV2_nobg.h5
96/96 [==============================] - 45s 471ms/step - loss: 0.3880 - categorical_accuracy: 0.8815 - val_loss: 0.4405 - val_categorical_accuracy: 0.9124

Epoch 00005: LearningRateScheduler reducing learning rate to 0.0004.
Epoch 5/40
95/96 [============================>.] - ETA: 0s - loss: 0.2551 - categorical_accuracy: 0.9257
Epoch 00005: val_categorical_accuracy improved from 0.91241 to 0.93431, saving model to ../output/best_models/finetuned_InceptionResNetV2_nobg.h5
96/96 [==============================] - 45s 472ms/step - loss: 0.2525 - categorical_accuracy: 0.9264 - val_loss: 0.3295 - val_categorical_accuracy: 0.9343

Epoch 00006: LearningRateScheduler reducing learning rate to 0.0004.
Epoch 6/40
95/96 [============================>.] - ETA: 0s - loss: 0.1618 - categorical_accuracy: 0.9467
Epoch 00006: val_categorical_accuracy did not improve from 0.93431
96/96 [==============================] - 43s 446ms/step - loss: 0.1603 - categorical_accuracy: 0.9473 - val_loss: 0.2847 - val_categorical_accuracy: 0.9197

Epoch 00007: LearningRateScheduler reducing learning rate to 0.0004.
Epoch 7/40
95/96 [============================>.] - ETA: 0s - loss: 0.1459 - categorical_accuracy: 0.9572
Epoch 00007: val_categorical_accuracy improved from 0.93431 to 0.95620, saving model to ../output/best_models/finetuned_InceptionResNetV2_nobg.h5
96/96 [==============================] - 45s 471ms/step - loss: 0.1445 - categorical_accuracy: 0.9577 - val_loss: 0.1884 - val_categorical_accuracy: 0.9562

Epoch 00008: LearningRateScheduler reducing learning rate to 0.0004.
Epoch 8/40
95/96 [============================>.] - ETA: 0s - loss: 0.1135 - categorical_accuracy: 0.9645
Epoch 00008: val_categorical_accuracy did not improve from 0.95620
96/96 [==============================] - 43s 447ms/step - loss: 0.1133 - categorical_accuracy: 0.9648 - val_loss: 0.2382 - val_categorical_accuracy: 0.9416

Epoch 00009: LearningRateScheduler reducing learning rate to 0.0004.
Epoch 9/40
95/96 [============================>.] - ETA: 0s - loss: 0.1460 - categorical_accuracy: 0.9559
Epoch 00009: val_categorical_accuracy did not improve from 0.95620
96/96 [==============================] - 43s 447ms/step - loss: 0.1453 - categorical_accuracy: 0.9564 - val_loss: 0.2329 - val_categorical_accuracy: 0.9234

Epoch 00010: LearningRateScheduler reducing learning rate to 0.0004.
Epoch 10/40
95/96 [============================>.] - ETA: 0s - loss: 0.1206 - categorical_accuracy: 0.9605
Epoch 00010: val_categorical_accuracy did not improve from 0.95620
96/96 [==============================] - 43s 447ms/step - loss: 0.1197 - categorical_accuracy: 0.9609 - val_loss: 0.2131 - val_categorical_accuracy: 0.9416

Epoch 00011: LearningRateScheduler reducing learning rate to 0.0004.
Epoch 11/40
95/96 [============================>.] - ETA: 0s - loss: 0.0662 - categorical_accuracy: 0.9789
Epoch 00011: val_categorical_accuracy did not improve from 0.95620
96/96 [==============================] - 43s 447ms/step - loss: 0.0656 - categorical_accuracy: 0.9792 - val_loss: 0.2054 - val_categorical_accuracy: 0.9416

Epoch 00012: LearningRateScheduler reducing learning rate to 0.00034.
Epoch 12/40
95/96 [============================>.] - ETA: 0s - loss: 0.0666 - categorical_accuracy: 0.9763
Epoch 00012: val_categorical_accuracy did not improve from 0.95620
96/96 [==============================] - 43s 447ms/step - loss: 0.0673 - categorical_accuracy: 0.9759 - val_loss: 0.2336 - val_categorical_accuracy: 0.9380

Epoch 00013: LearningRateScheduler reducing learning rate to 0.00029200000000000005.
Epoch 13/40
95/96 [============================>.] - ETA: 0s - loss: 0.0412 - categorical_accuracy: 0.9862
Epoch 00013: val_categorical_accuracy did not improve from 0.95620
96/96 [==============================] - 43s 449ms/step - loss: 0.0425 - categorical_accuracy: 0.9857 - val_loss: 0.2479 - val_categorical_accuracy: 0.9343

Epoch 00014: LearningRateScheduler reducing learning rate to 0.00025360000000000004.
Epoch 14/40
95/96 [============================>.] - ETA: 0s - loss: 0.0169 - categorical_accuracy: 0.9980
Epoch 00014: val_categorical_accuracy did not improve from 0.95620
96/96 [==============================] - 43s 447ms/step - loss: 0.0168 - categorical_accuracy: 0.9980 - val_loss: 0.1748 - val_categorical_accuracy: 0.9380

Epoch 00015: LearningRateScheduler reducing learning rate to 0.00022288000000000006.
Epoch 15/40
95/96 [============================>.] - ETA: 0s - loss: 0.0136 - categorical_accuracy: 0.9980
Epoch 00015: val_categorical_accuracy did not improve from 0.95620
96/96 [==============================] - 43s 446ms/step - loss: 0.0134 - categorical_accuracy: 0.9980 - val_loss: 0.2183 - val_categorical_accuracy: 0.9380

Epoch 00016: LearningRateScheduler reducing learning rate to 0.00019830400000000006.
Epoch 16/40
95/96 [============================>.] - ETA: 0s - loss: 0.0108 - categorical_accuracy: 0.9980
Epoch 00016: val_categorical_accuracy did not improve from 0.95620
96/96 [==============================] - 43s 446ms/step - loss: 0.0107 - categorical_accuracy: 0.9980 - val_loss: 0.2115 - val_categorical_accuracy: 0.9526

Epoch 00017: LearningRateScheduler reducing learning rate to 0.00017864320000000004.
Epoch 17/40
95/96 [============================>.] - ETA: 0s - loss: 0.0102 - categorical_accuracy: 0.9987
Epoch 00017: val_categorical_accuracy did not improve from 0.95620
96/96 [==============================] - 43s 447ms/step - loss: 0.0101 - categorical_accuracy: 0.9987 - val_loss: 0.2228 - val_categorical_accuracy: 0.9416

Epoch 00018: LearningRateScheduler reducing learning rate to 0.00016291456000000005.
Epoch 18/40
95/96 [============================>.] - ETA: 0s - loss: 0.0081 - categorical_accuracy: 0.9980
Epoch 00018: val_categorical_accuracy did not improve from 0.95620
96/96 [==============================] - 43s 447ms/step - loss: 0.0080 - categorical_accuracy: 0.9980 - val_loss: 0.2341 - val_categorical_accuracy: 0.9526

Epoch 00019: LearningRateScheduler reducing learning rate to 0.00015033164800000003.
Epoch 19/40
95/96 [============================>.] - ETA: 0s - loss: 0.0038 - categorical_accuracy: 0.9987
Epoch 00019: val_categorical_accuracy did not improve from 0.95620
96/96 [==============================] - 43s 447ms/step - loss: 0.0038 - categorical_accuracy: 0.9987 - val_loss: 0.2343 - val_categorical_accuracy: 0.9489

Epoch 00020: LearningRateScheduler reducing learning rate to 0.00014026531840000004.
Epoch 20/40
95/96 [============================>.] - ETA: 0s - loss: 0.0031 - categorical_accuracy: 0.9993
Epoch 00020: val_categorical_accuracy did not improve from 0.95620
96/96 [==============================] - 43s 446ms/step - loss: 0.0030 - categorical_accuracy: 0.9993 - val_loss: 0.2962 - val_categorical_accuracy: 0.9453

Epoch 00021: LearningRateScheduler reducing learning rate to 0.00013221225472000002.
Epoch 21/40
95/96 [============================>.] - ETA: 0s - loss: 0.0033 - categorical_accuracy: 0.9993
Epoch 00021: val_categorical_accuracy did not improve from 0.95620
96/96 [==============================] - 43s 447ms/step - loss: 0.0033 - categorical_accuracy: 0.9993 - val_loss: 0.2506 - val_categorical_accuracy: 0.9416

Epoch 00022: LearningRateScheduler reducing learning rate to 0.00012576980377600002.
Epoch 22/40
95/96 [============================>.] - ETA: 0s - loss: 0.0037 - categorical_accuracy: 0.9993
Epoch 00022: val_categorical_accuracy did not improve from 0.95620
96/96 [==============================] - 43s 447ms/step - loss: 0.0037 - categorical_accuracy: 0.9993 - val_loss: 0.2665 - val_categorical_accuracy: 0.9489

Epoch 00023: LearningRateScheduler reducing learning rate to 0.00012061584302080001.
Epoch 23/40
95/96 [============================>.] - ETA: 0s - loss: 0.0041 - categorical_accuracy: 0.9993
Epoch 00023: val_categorical_accuracy did not improve from 0.95620
96/96 [==============================] - 43s 447ms/step - loss: 0.0041 - categorical_accuracy: 0.9993 - val_loss: 0.2375 - val_categorical_accuracy: 0.9343

Epoch 00024: LearningRateScheduler reducing learning rate to 0.00011649267441664002.
Epoch 24/40
95/96 [============================>.] - ETA: 0s - loss: 0.0031 - categorical_accuracy: 0.9987
Epoch 00024: val_categorical_accuracy did not improve from 0.95620
96/96 [==============================] - 43s 446ms/step - loss: 0.0031 - categorical_accuracy: 0.9987 - val_loss: 0.2379 - val_categorical_accuracy: 0.9416

Epoch 00025: LearningRateScheduler reducing learning rate to 0.00011319413953331202.
Epoch 25/40
95/96 [============================>.] - ETA: 0s - loss: 0.0081 - categorical_accuracy: 0.9987
Epoch 00025: val_categorical_accuracy did not improve from 0.95620
96/96 [==============================] - 43s 446ms/step - loss: 0.0080 - categorical_accuracy: 0.9987 - val_loss: 0.1953 - val_categorical_accuracy: 0.9526

Epoch 00026: LearningRateScheduler reducing learning rate to 0.00011055531162664962.
Epoch 26/40
95/96 [============================>.] - ETA: 0s - loss: 0.0013 - categorical_accuracy: 1.0000
Epoch 00026: val_categorical_accuracy did not improve from 0.95620
96/96 [==============================] - 43s 447ms/step - loss: 0.0013 - categorical_accuracy: 1.0000 - val_loss: 0.2069 - val_categorical_accuracy: 0.9526

Epoch 00027: LearningRateScheduler reducing learning rate to 0.0001084442493013197.
Epoch 27/40
95/96 [============================>.] - ETA: 0s - loss: 0.0021 - categorical_accuracy: 0.9993
Epoch 00027: val_categorical_accuracy did not improve from 0.95620
96/96 [==============================] - 43s 447ms/step - loss: 0.0021 - categorical_accuracy: 0.9993 - val_loss: 0.2166 - val_categorical_accuracy: 0.9453

Epoch 00028: LearningRateScheduler reducing learning rate to 0.00010675539944105576.
Epoch 28/40
95/96 [============================>.] - ETA: 0s - loss: 0.0016 - categorical_accuracy: 0.9993
Epoch 00028: val_categorical_accuracy did not improve from 0.95620
96/96 [==============================] - 43s 446ms/step - loss: 0.0015 - categorical_accuracy: 0.9993 - val_loss: 0.2196 - val_categorical_accuracy: 0.9416

Epoch 00029: LearningRateScheduler reducing learning rate to 0.0001054043195528446.
Epoch 29/40
95/96 [============================>.] - ETA: 0s - loss: 0.0090 - categorical_accuracy: 0.9974
Epoch 00029: val_categorical_accuracy did not improve from 0.95620
96/96 [==============================] - 43s 447ms/step - loss: 0.0089 - categorical_accuracy: 0.9974 - val_loss: 0.2622 - val_categorical_accuracy: 0.9453

Epoch 00030: LearningRateScheduler reducing learning rate to 0.00010432345564227568.
Epoch 30/40
95/96 [============================>.] - ETA: 0s - loss: 0.0090 - categorical_accuracy: 0.9993
Epoch 00030: val_categorical_accuracy did not improve from 0.95620
96/96 [==============================] - 43s 447ms/step - loss: 0.0089 - categorical_accuracy: 0.9993 - val_loss: 0.2559 - val_categorical_accuracy: 0.9416

Epoch 00031: LearningRateScheduler reducing learning rate to 0.00010345876451382055.
Epoch 31/40
95/96 [============================>.] - ETA: 0s - loss: 0.0011 - categorical_accuracy: 1.0000
Epoch 00031: val_categorical_accuracy did not improve from 0.95620
96/96 [==============================] - 43s 446ms/step - loss: 0.0011 - categorical_accuracy: 1.0000 - val_loss: 0.2606 - val_categorical_accuracy: 0.9343

Epoch 00032: LearningRateScheduler reducing learning rate to 0.00010276701161105644.
Epoch 32/40
95/96 [============================>.] - ETA: 0s - loss: 0.0084 - categorical_accuracy: 0.9980
Epoch 00032: val_categorical_accuracy did not improve from 0.95620
96/96 [==============================] - 43s 449ms/step - loss: 0.0083 - categorical_accuracy: 0.9980 - val_loss: 0.2422 - val_categorical_accuracy: 0.9489

Epoch 00033: LearningRateScheduler reducing learning rate to 0.00010221360928884516.
Epoch 33/40
95/96 [============================>.] - ETA: 0s - loss: 0.0013 - categorical_accuracy: 1.0000
Epoch 00033: val_categorical_accuracy did not improve from 0.95620
96/96 [==============================] - 43s 447ms/step - loss: 0.0013 - categorical_accuracy: 1.0000 - val_loss: 0.2156 - val_categorical_accuracy: 0.9526

Epoch 00034: LearningRateScheduler reducing learning rate to 0.00010177088743107613.
Epoch 34/40
95/96 [============================>.] - ETA: 0s - loss: 0.0027 - categorical_accuracy: 0.9993
Epoch 00034: val_categorical_accuracy did not improve from 0.95620
96/96 [==============================] - 43s 446ms/step - loss: 0.0027 - categorical_accuracy: 0.9993 - val_loss: 0.2507 - val_categorical_accuracy: 0.9380

Epoch 00035: LearningRateScheduler reducing learning rate to 0.0001014167099448609.
Epoch 35/40
95/96 [============================>.] - ETA: 0s - loss: 4.7122e-04 - categorical_accuracy: 1.0000
Epoch 00035: val_categorical_accuracy did not improve from 0.95620
96/96 [==============================] - 43s 447ms/step - loss: 0.0045 - categorical_accuracy: 0.9993 - val_loss: 0.2429 - val_categorical_accuracy: 0.9526

Epoch 00036: LearningRateScheduler reducing learning rate to 0.00010113336795588872.
Epoch 36/40
95/96 [============================>.] - ETA: 0s - loss: 0.0012 - categorical_accuracy: 0.9993
Epoch 00036: val_categorical_accuracy did not improve from 0.95620
96/96 [==============================] - 43s 447ms/step - loss: 0.0012 - categorical_accuracy: 0.9993 - val_loss: 0.2511 - val_categorical_accuracy: 0.9562

Epoch 00037: LearningRateScheduler reducing learning rate to 0.00010090669436471098.
Epoch 37/40
95/96 [============================>.] - ETA: 0s - loss: 0.0012 - categorical_accuracy: 0.9993
Epoch 00037: val_categorical_accuracy did not improve from 0.95620
96/96 [==============================] - 43s 447ms/step - loss: 0.0011 - categorical_accuracy: 0.9993 - val_loss: 0.2082 - val_categorical_accuracy: 0.9526

Epoch 00038: LearningRateScheduler reducing learning rate to 0.00010072535549176879.
Epoch 38/40
95/96 [============================>.] - ETA: 0s - loss: 0.0222 - categorical_accuracy: 0.9967
Epoch 00038: val_categorical_accuracy did not improve from 0.95620
96/96 [==============================] - 43s 447ms/step - loss: 0.0220 - categorical_accuracy: 0.9967 - val_loss: 0.3155 - val_categorical_accuracy: 0.9380

Epoch 00039: LearningRateScheduler reducing learning rate to 0.00010058028439341503.
Epoch 39/40
95/96 [============================>.] - ETA: 0s - loss: 0.0037 - categorical_accuracy: 0.9987
Epoch 00039: val_categorical_accuracy did not improve from 0.95620
96/96 [==============================] - 43s 447ms/step - loss: 0.0036 - categorical_accuracy: 0.9987 - val_loss: 0.2078 - val_categorical_accuracy: 0.9526

Epoch 00040: LearningRateScheduler reducing learning rate to 0.00010046422751473202.
Epoch 40/40
95/96 [============================>.] - ETA: 0s - loss: 8.5303e-04 - categorical_accuracy: 1.0000
Epoch 00040: val_categorical_accuracy did not improve from 0.95620
96/96 [==============================] - 43s 446ms/step - loss: 8.4477e-04 - categorical_accuracy: 1.0000 - val_loss: 0.2290 - val_categorical_accuracy: 0.9526
record:  0.18841718499768628 0.95620435
Start inference on test dataset.
114/114 [==============================] - 15s 134ms/step
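The summaries above all share the same head: an ImageNet backbone without its classification top, `GlobalMaxPooling2D`, and a single `Dense(4, softmax)` layer. A minimal sketch of assembling such a model in `tf.keras` (the input shape of 416×416 is inferred from the 11×11 / 13×13 feature maps in the summaries, and the optimizer is an assumption — neither appears in the log):

```python
import tensorflow as tf

def build_finetune_model(backbone_ctor, input_shape=(416, 416, 3),
                         n_classes=4, weights="imagenet"):
    # Backbone without its classification top, as in the summaries above.
    base = backbone_ctor(include_top=False, weights=weights,
                         input_shape=input_shape)
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalMaxPooling2D(),   # (H, W, C) -> (C,)
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(),
                  loss="categorical_crossentropy",
                  metrics=["categorical_accuracy"])
    return model
```

For example, `build_finetune_model(tf.keras.applications.InceptionResNetV2)` produces the first summary above.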
Finetune MobileNetV2...
/home/lzhu68/miniconda3/envs/ml/lib/python3.6/site-packages/keras_applications/mobilenet_v2.py:294: UserWarning:

`input_shape` is undefined or non-square, or `rows` is not in [96, 128, 160, 192, 224]. Weights for input shape (224, 224) will be loaded as the default.

Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
mobilenetv2_1.00_224 (Model) (None, 13, 13, 1280)      2257984   
_________________________________________________________________
global_max_pooling2d (Global (None, 1280)              0         
_________________________________________________________________
dense (Dense)                (None, 4)                 5124      
=================================================================
Total params: 2,263,108
Trainable params: 2,228,996
Non-trainable params: 34,112
_________________________________________________________________
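The `Dense` parameter counts printed in these summaries can be checked by hand: a dense layer has `units * (input_dim + 1)` parameters (weights plus biases). A small sanity check, using the pooled feature widths from the two summaries:

```python
# Parameter count of a fully-connected layer: weights plus one bias per unit.
def dense_params(input_dim, units=4):
    return units * (input_dim + 1)

# InceptionResNetV2 pooled features (1536) -> 6,148 parameters
# MobileNetV2 pooled features (1280)       -> 5,124 parameters
```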
Train for 96 steps, validate for 18 steps

Epoch 00001: LearningRateScheduler reducing learning rate to 0.0001.
Epoch 1/40
95/96 [============================>.] - ETA: 0s - loss: 1.2513 - categorical_accuracy: 0.5822
Epoch 00001: val_categorical_accuracy improved from -inf to 0.38686, saving model to ../output/best_models/finetuned_MobileNetV2_nobg.h5
96/96 [==============================] - 22s 226ms/step - loss: 1.2443 - categorical_accuracy: 0.5846 - val_loss: 2.1578 - val_categorical_accuracy: 0.3869

Epoch 00002: LearningRateScheduler reducing learning rate to 0.00017500000000000003.
Epoch 2/40
95/96 [============================>.] - ETA: 0s - loss: 0.6886 - categorical_accuracy: 0.7803
Epoch 00002: val_categorical_accuracy improved from 0.38686 to 0.56934, saving model to ../output/best_models/finetuned_MobileNetV2_nobg.h5
96/96 [==============================] - 19s 199ms/step - loss: 0.6854 - categorical_accuracy: 0.7812 - val_loss: 2.7338 - val_categorical_accuracy: 0.5693

Epoch 00003: LearningRateScheduler reducing learning rate to 0.00025.
Epoch 3/40
95/96 [============================>.] - ETA: 0s - loss: 0.4750 - categorical_accuracy: 0.8507
Epoch 00003: val_categorical_accuracy improved from 0.56934 to 0.61314, saving model to ../output/best_models/finetuned_MobileNetV2_nobg.h5
96/96 [==============================] - 19s 201ms/step - loss: 0.4810 - categorical_accuracy: 0.8483 - val_loss: 2.5140 - val_categorical_accuracy: 0.6131

Epoch 00004: LearningRateScheduler reducing learning rate to 0.00032500000000000004.
Epoch 4/40
95/96 [============================>.] - ETA: 0s - loss: 0.5592 - categorical_accuracy: 0.8559
Epoch 00004: val_categorical_accuracy did not improve from 0.61314
96/96 [==============================] - 19s 198ms/step - loss: 0.5718 - categorical_accuracy: 0.8548 - val_loss: 1.9443 - val_categorical_accuracy: 0.5292

Epoch 00005: LearningRateScheduler reducing learning rate to 0.0004.
Epoch 5/40
95/96 [============================>.] - ETA: 0s - loss: 0.5945 - categorical_accuracy: 0.8724
Epoch 00005: val_categorical_accuracy did not improve from 0.61314
96/96 [==============================] - 19s 193ms/step - loss: 0.5984 - categorical_accuracy: 0.8717 - val_loss: 2.5051 - val_categorical_accuracy: 0.5584

Epoch 00006: LearningRateScheduler reducing learning rate to 0.0004.
Epoch 6/40
95/96 [============================>.] - ETA: 0s - loss: 0.5806 - categorical_accuracy: 0.8638
Epoch 00006: val_categorical_accuracy improved from 0.61314 to 0.67518, saving model to ../output/best_models/finetuned_MobileNetV2_nobg.h5
96/96 [==============================] - 19s 199ms/step - loss: 0.5745 - categorical_accuracy: 0.8652 - val_loss: 1.9716 - val_categorical_accuracy: 0.6752

Epoch 00007: LearningRateScheduler reducing learning rate to 0.0004.
Epoch 7/40
95/96 [============================>.] - ETA: 0s - loss: 0.5756 - categorical_accuracy: 0.8770
Epoch 00007: val_categorical_accuracy improved from 0.67518 to 0.68248, saving model to ../output/best_models/finetuned_MobileNetV2_nobg.h5
96/96 [==============================] - 19s 200ms/step - loss: 0.5708 - categorical_accuracy: 0.8776 - val_loss: 2.0300 - val_categorical_accuracy: 0.6825

Epoch 00008: LearningRateScheduler reducing learning rate to 0.0004.
Epoch 8/40
95/96 [============================>.] - ETA: 0s - loss: 0.3993 - categorical_accuracy: 0.9079
Epoch 00008: val_categorical_accuracy did not improve from 0.68248
96/96 [==============================] - 19s 198ms/step - loss: 0.3972 - categorical_accuracy: 0.9082 - val_loss: 2.7276 - val_categorical_accuracy: 0.6204

Epoch 00009: LearningRateScheduler reducing learning rate to 0.0004.
Epoch 9/40
95/96 [============================>.] - ETA: 0s - loss: 0.2944 - categorical_accuracy: 0.9158
Epoch 00009: val_categorical_accuracy improved from 0.68248 to 0.77372, saving model to ../output/best_models/finetuned_MobileNetV2_nobg.h5
96/96 [==============================] - 19s 197ms/step - loss: 0.2962 - categorical_accuracy: 0.9154 - val_loss: 1.3493 - val_categorical_accuracy: 0.7737

Epoch 00010: LearningRateScheduler reducing learning rate to 0.0004.
Epoch 10/40
95/96 [============================>.] - ETA: 0s - loss: 0.3652 - categorical_accuracy: 0.9197
Epoch 00010: val_categorical_accuracy did not improve from 0.77372
96/96 [==============================] - 19s 195ms/step - loss: 0.3739 - categorical_accuracy: 0.9167 - val_loss: 4.4168 - val_categorical_accuracy: 0.4015

Epoch 00011: LearningRateScheduler reducing learning rate to 0.0004.
Epoch 11/40
95/96 [============================>.] - ETA: 0s - loss: 0.3238 - categorical_accuracy: 0.9270
Epoch 00011: val_categorical_accuracy improved from 0.77372 to 0.83212, saving model to ../output/best_models/finetuned_MobileNetV2_nobg.h5
96/96 [==============================] - 19s 199ms/step - loss: 0.3205 - categorical_accuracy: 0.9277 - val_loss: 0.8768 - val_categorical_accuracy: 0.8321

Epoch 00012: LearningRateScheduler reducing learning rate to 0.00034.
Epoch 12/40
95/96 [============================>.] - ETA: 0s - loss: 0.2546 - categorical_accuracy: 0.9316
Epoch 00012: val_categorical_accuracy improved from 0.83212 to 0.91971, saving model to ../output/best_models/finetuned_MobileNetV2_nobg.h5
96/96 [==============================] - 19s 199ms/step - loss: 0.2534 - categorical_accuracy: 0.9316 - val_loss: 0.4603 - val_categorical_accuracy: 0.9197

Epoch 00013: LearningRateScheduler reducing learning rate to 0.00029200000000000005.
Epoch 13/40
95/96 [============================>.] - ETA: 0s - loss: 0.1507 - categorical_accuracy: 0.9645
Epoch 00013: val_categorical_accuracy did not improve from 0.91971
96/96 [==============================] - 18s 192ms/step - loss: 0.1513 - categorical_accuracy: 0.9642 - val_loss: 1.0849 - val_categorical_accuracy: 0.7555

Epoch 00014: LearningRateScheduler reducing learning rate to 0.00025360000000000004.
Epoch 14/40
95/96 [============================>.] - ETA: 0s - loss: 0.1723 - categorical_accuracy: 0.9553
Epoch 00014: val_categorical_accuracy did not improve from 0.91971
96/96 [==============================] - 19s 197ms/step - loss: 0.1706 - categorical_accuracy: 0.9557 - val_loss: 0.4357 - val_categorical_accuracy: 0.8759

Epoch 00015: LearningRateScheduler reducing learning rate to 0.00022288000000000006.
Epoch 15/40
95/96 [============================>.] - ETA: 0s - loss: 0.1106 - categorical_accuracy: 0.9697
Epoch 00015: val_categorical_accuracy did not improve from 0.91971
96/96 [==============================] - 19s 197ms/step - loss: 0.1098 - categorical_accuracy: 0.9701 - val_loss: 0.5191 - val_categorical_accuracy: 0.8759

Epoch 00016: LearningRateScheduler reducing learning rate to 0.00019830400000000006.
Epoch 16/40
95/96 [============================>.] - ETA: 0s - loss: 0.1312 - categorical_accuracy: 0.9678
Epoch 00016: val_categorical_accuracy did not improve from 0.91971
96/96 [==============================] - 19s 196ms/step - loss: 0.1333 - categorical_accuracy: 0.9674 - val_loss: 0.5126 - val_categorical_accuracy: 0.8832

Epoch 00017: LearningRateScheduler reducing learning rate to 0.00017864320000000004.
Epoch 17/40
95/96 [============================>.] - ETA: 0s - loss: 0.0607 - categorical_accuracy: 0.9803
Epoch 00017: val_categorical_accuracy did not improve from 0.91971
96/96 [==============================] - 18s 192ms/step - loss: 0.0613 - categorical_accuracy: 0.9798 - val_loss: 0.4982 - val_categorical_accuracy: 0.8686

Epoch 00018: LearningRateScheduler reducing learning rate to 0.00016291456000000005.
Epoch 18/40
95/96 [============================>.] - ETA: 0s - loss: 0.0422 - categorical_accuracy: 0.9862
Epoch 00018: val_categorical_accuracy improved from 0.91971 to 0.93796, saving model to ../output/best_models/finetuned_MobileNetV2_nobg.h5
96/96 [==============================] - 19s 200ms/step - loss: 0.0419 - categorical_accuracy: 0.9863 - val_loss: 0.4103 - val_categorical_accuracy: 0.9380

Epoch 00019: LearningRateScheduler reducing learning rate to 0.00015033164800000003.
Epoch 19/40
95/96 [============================>.] - ETA: 0s - loss: 0.0492 - categorical_accuracy: 0.9855
Epoch 00019: val_categorical_accuracy did not improve from 0.93796
96/96 [==============================] - 19s 197ms/step - loss: 0.0500 - categorical_accuracy: 0.9850 - val_loss: 0.3558 - val_categorical_accuracy: 0.9343

Epoch 00020: LearningRateScheduler reducing learning rate to 0.00014026531840000004.
Epoch 20/40
95/96 [============================>.] - ETA: 0s - loss: 0.0403 - categorical_accuracy: 0.9855
Epoch 00020: val_categorical_accuracy did not improve from 0.93796
96/96 [==============================] - 19s 196ms/step - loss: 0.0399 - categorical_accuracy: 0.9857 - val_loss: 0.3686 - val_categorical_accuracy: 0.9197

Epoch 00021: LearningRateScheduler reducing learning rate to 0.00013221225472000002.
Epoch 21/40
95/96 [============================>.] - ETA: 0s - loss: 0.0211 - categorical_accuracy: 0.9934
Epoch 00021: val_categorical_accuracy did not improve from 0.93796
96/96 [==============================] - 18s 191ms/step - loss: 0.0209 - categorical_accuracy: 0.9935 - val_loss: 0.3329 - val_categorical_accuracy: 0.9307

Epoch 00022: LearningRateScheduler reducing learning rate to 0.00012576980377600002.
Epoch 22/40
95/96 [============================>.] - ETA: 0s - loss: 0.0269 - categorical_accuracy: 0.9908
Epoch 00022: val_categorical_accuracy did not improve from 0.93796
96/96 [==============================] - 19s 196ms/step - loss: 0.0266 - categorical_accuracy: 0.9909 - val_loss: 0.4443 - val_categorical_accuracy: 0.9051

Epoch 00023: LearningRateScheduler reducing learning rate to 0.00012061584302080001.
Epoch 23/40
95/96 [============================>.] - ETA: 0s - loss: 0.0256 - categorical_accuracy: 0.9921
Epoch 00023: val_categorical_accuracy did not improve from 0.93796
96/96 [==============================] - 19s 196ms/step - loss: 0.0254 - categorical_accuracy: 0.9922 - val_loss: 0.3885 - val_categorical_accuracy: 0.9161

Epoch 00024: LearningRateScheduler reducing learning rate to 0.00011649267441664002.
Epoch 24/40
95/96 [============================>.] - ETA: 0s - loss: 0.0166 - categorical_accuracy: 0.9961
Epoch 00024: val_categorical_accuracy did not improve from 0.93796
96/96 [==============================] - 19s 197ms/step - loss: 0.0166 - categorical_accuracy: 0.9961 - val_loss: 0.3775 - val_categorical_accuracy: 0.9197

Epoch 00025: LearningRateScheduler reducing learning rate to 0.00011319413953331202.
Epoch 25/40
95/96 [============================>.] - ETA: 0s - loss: 0.0302 - categorical_accuracy: 0.9901
Epoch 00025: val_categorical_accuracy did not improve from 0.93796
96/96 [==============================] - 18s 191ms/step - loss: 0.0304 - categorical_accuracy: 0.9896 - val_loss: 0.4099 - val_categorical_accuracy: 0.9234

Epoch 00026: LearningRateScheduler reducing learning rate to 0.00011055531162664962.
Epoch 26/40
95/96 [============================>.] - ETA: 0s - loss: 0.0257 - categorical_accuracy: 0.9882
Epoch 00026: val_categorical_accuracy did not improve from 0.93796
96/96 [==============================] - 19s 197ms/step - loss: 0.0255 - categorical_accuracy: 0.9883 - val_loss: 0.4255 - val_categorical_accuracy: 0.9197

Epoch 00027: LearningRateScheduler reducing learning rate to 0.0001084442493013197.
Epoch 27/40
95/96 [============================>.] - ETA: 0s - loss: 0.0394 - categorical_accuracy: 0.9895
Epoch 00027: val_categorical_accuracy did not improve from 0.93796
96/96 [==============================] - 19s 197ms/step - loss: 0.0390 - categorical_accuracy: 0.9896 - val_loss: 0.3334 - val_categorical_accuracy: 0.9343

Epoch 00028: LearningRateScheduler reducing learning rate to 0.00010675539944105576.
Epoch 28/40
95/96 [============================>.] - ETA: 0s - loss: 0.0228 - categorical_accuracy: 0.9928
Epoch 00028: val_categorical_accuracy did not improve from 0.93796
96/96 [==============================] - 19s 197ms/step - loss: 0.0226 - categorical_accuracy: 0.9928 - val_loss: 0.4526 - val_categorical_accuracy: 0.9270

Epoch 00029: LearningRateScheduler reducing learning rate to 0.0001054043195528446.
Epoch 29/40
95/96 [============================>.] - ETA: 0s - loss: 0.0139 - categorical_accuracy: 0.9961
Epoch 00029: val_categorical_accuracy did not improve from 0.93796
96/96 [==============================] - 18s 191ms/step - loss: 0.0138 - categorical_accuracy: 0.9961 - val_loss: 0.3938 - val_categorical_accuracy: 0.9307

Epoch 00030: LearningRateScheduler reducing learning rate to 0.00010432345564227568.
Epoch 30/40
95/96 [============================>.] - ETA: 0s - loss: 0.0196 - categorical_accuracy: 0.9921
Epoch 00030: val_categorical_accuracy improved from 0.93796 to 0.94161, saving model to ../output/best_models/finetuned_MobileNetV2_nobg.h5
96/96 [==============================] - 19s 201ms/step - loss: 0.0194 - categorical_accuracy: 0.9922 - val_loss: 0.4373 - val_categorical_accuracy: 0.9416

Epoch 00031: LearningRateScheduler reducing learning rate to 0.00010345876451382055.
Epoch 31/40
95/96 [============================>.] - ETA: 0s - loss: 0.0051 - categorical_accuracy: 1.0000
Epoch 00031: val_categorical_accuracy did not improve from 0.94161
96/96 [==============================] - 19s 196ms/step - loss: 0.0053 - categorical_accuracy: 1.0000 - val_loss: 0.3731 - val_categorical_accuracy: 0.9416

Epoch 00032: LearningRateScheduler reducing learning rate to 0.00010276701161105644.
Epoch 32/40
95/96 [============================>.] - ETA: 0s - loss: 0.0128 - categorical_accuracy: 0.9967
Epoch 00032: val_categorical_accuracy did not improve from 0.94161
96/96 [==============================] - 19s 196ms/step - loss: 0.0127 - categorical_accuracy: 0.9967 - val_loss: 0.3976 - val_categorical_accuracy: 0.9380

Epoch 00033: LearningRateScheduler reducing learning rate to 0.00010221360928884516.
Epoch 33/40
95/96 [============================>.] - ETA: 0s - loss: 0.0079 - categorical_accuracy: 0.9980
Epoch 00033: val_categorical_accuracy did not improve from 0.94161
96/96 [==============================] - 18s 191ms/step - loss: 0.0078 - categorical_accuracy: 0.9980 - val_loss: 0.4026 - val_categorical_accuracy: 0.9416

Epoch 00034: LearningRateScheduler reducing learning rate to 0.00010177088743107613.
Epoch 34/40
95/96 [============================>.] - ETA: 0s - loss: 0.0133 - categorical_accuracy: 0.9961
Epoch 00034: val_categorical_accuracy did not improve from 0.94161
96/96 [==============================] - 19s 197ms/step - loss: 0.0132 - categorical_accuracy: 0.9961 - val_loss: 0.3600 - val_categorical_accuracy: 0.9343

Epoch 00035: LearningRateScheduler reducing learning rate to 0.0001014167099448609.
Epoch 35/40
95/96 [============================>.] - ETA: 0s - loss: 0.0071 - categorical_accuracy: 0.9987
Epoch 00035: val_categorical_accuracy did not improve from 0.94161
96/96 [==============================] - 19s 197ms/step - loss: 0.0071 - categorical_accuracy: 0.9987 - val_loss: 0.3438 - val_categorical_accuracy: 0.9380

Epoch 00036: LearningRateScheduler reducing learning rate to 0.00010113336795588872.
Epoch 36/40
95/96 [============================>.] - ETA: 0s - loss: 0.0054 - categorical_accuracy: 0.9980
Epoch 00036: val_categorical_accuracy did not improve from 0.94161
96/96 [==============================] - 19s 197ms/step - loss: 0.0054 - categorical_accuracy: 0.9980 - val_loss: 0.3467 - val_categorical_accuracy: 0.9416

Epoch 00037: LearningRateScheduler reducing learning rate to 0.00010090669436471098.
Epoch 37/40
95/96 [============================>.] - ETA: 0s - loss: 0.0141 - categorical_accuracy: 0.9947
Epoch 00037: val_categorical_accuracy improved from 0.94161 to 0.94526, saving model to ../output/best_models/finetuned_MobileNetV2_nobg.h5
96/96 [==============================] - 19s 195ms/step - loss: 0.0139 - categorical_accuracy: 0.9948 - val_loss: 0.3822 - val_categorical_accuracy: 0.9453

Epoch 00038: LearningRateScheduler reducing learning rate to 0.00010072535549176879.
Epoch 38/40
95/96 [============================>.] - ETA: 0s - loss: 0.0205 - categorical_accuracy: 0.9914
Epoch 00038: val_categorical_accuracy did not improve from 0.94526
96/96 [==============================] - 19s 197ms/step - loss: 0.0203 - categorical_accuracy: 0.9915 - val_loss: 0.4185 - val_categorical_accuracy: 0.9307

Epoch 00039: LearningRateScheduler reducing learning rate to 0.00010058028439341503.
Epoch 39/40
95/96 [============================>.] - ETA: 0s - loss: 0.0372 - categorical_accuracy: 0.9908
Epoch 00039: val_categorical_accuracy did not improve from 0.94526
96/96 [==============================] - 19s 198ms/step - loss: 0.0368 - categorical_accuracy: 0.9909 - val_loss: 0.4273 - val_categorical_accuracy: 0.9234

Epoch 00040: LearningRateScheduler reducing learning rate to 0.00010046422751473202.
Epoch 40/40
95/96 [============================>.] - ETA: 0s - loss: 0.0259 - categorical_accuracy: 0.9921
Epoch 00040: val_categorical_accuracy did not improve from 0.94526
96/96 [==============================] - 19s 196ms/step - loss: 0.0270 - categorical_accuracy: 0.9915 - val_loss: 0.4964 - val_categorical_accuracy: 0.9161
record:  0.3822053930716215 0.94525546
Start inference on test dataset.
114/114 [==============================] - 4s 31ms/step
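The learning-rate values printed by `LearningRateScheduler` in the log above are consistent with a linear warm-up, a hold phase at the peak rate, and an exponential decay toward a floor. The sketch below is my own reconstruction that reproduces the logged values (the function and parameter names are assumptions, not necessarily the notebook's actual callback):

```python
def lr_schedule(epoch,
                lr_start=1e-4, lr_max=4e-4, lr_min=1e-4,
                warmup_epochs=4, sustain_epochs=6, decay=0.8):
    """Reconstructed schedule: linear warm-up, hold, exponential decay to a floor.

    `epoch` is the zero-based epoch index passed by Keras'
    LearningRateScheduler callback.
    """
    if epoch < warmup_epochs:
        # linear warm-up from lr_start to lr_max
        return lr_start + (lr_max - lr_start) * epoch / warmup_epochs
    if epoch < warmup_epochs + sustain_epochs:
        # hold at the peak rate
        return lr_max
    # exponential decay toward lr_min
    return (lr_max - lr_min) * decay ** (epoch - warmup_epochs - sustain_epochs) + lr_min
```

With these parameters the function reproduces the logged rates, e.g. epoch index 2 gives 0.00025, index 11 gives 0.00034, and index 12 gives 0.000292, matching the log line by line.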
In [49]:
report_df = pd.DataFrame(record_ls)

with pd.option_context('display.max_rows', None, 'display.max_columns', None):
    display(report_df)

report_df.to_csv(os.path.join('../output', 'fintune_cnn_nobg_report.csv'),
                 index=False)
   model                             train_loss  valid_loss  train_acc  valid_acc
0  finetuned_ResNet101V2_nobg          0.015460    0.246255   0.994792   0.952555
1  finetuned_VGG16_nobg                0.023319    0.430910   0.992188   0.956204
2  finetuned_InceptionResNetV2_nobg    0.144484    0.188417   0.957682   0.956204
3  finetuned_MobileNetV2_nobg          0.013917    0.382205   0.994792   0.945255

For all models, validation accuracy decreases slightly compared to the no-background-removal scenario. However, overfitting appears to be eased for InceptionResNetV2, since its training and validation accuracy are very close.
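The overfitting observation can be checked directly from the report table: the train/validation accuracy gap is smallest for InceptionResNetV2 by a wide margin. A small sketch with pandas, using the values copied from the report:

```python
import pandas as pd

# Accuracy values copied from the fine-tuning report above.
report = pd.DataFrame({
    'model': ['finetuned_ResNet101V2_nobg', 'finetuned_VGG16_nobg',
              'finetuned_InceptionResNetV2_nobg', 'finetuned_MobileNetV2_nobg'],
    'train_acc': [0.994792, 0.992188, 0.957682, 0.994792],
    'valid_acc': [0.952555, 0.956204, 0.956204, 0.945255],
})

# Generalization gap: large gap suggests overfitting.
report['acc_gap'] = report['train_acc'] - report['valid_acc']
print(report.sort_values('acc_gap')[['model', 'acc_gap']])
```

InceptionResNetV2's gap is about 0.0015, roughly thirty times smaller than the other three models, which supports the "overfitting eased" reading.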

Ensembling

Ensembling averages multiple prediction vectors to reduce errors and improve accuracy. Here, I ensemble the predictions of the fine-tuned MobileNetV2, ResNet101V2, and InceptionResNetV2 models to (hopefully) produce a better final submission.

In [20]:
ensemble_subs = ['finetune_MobileNetV2.csv',
                 'finetune_ResNet101V2.csv',
                 'finetune_InceptionResNetV2.csv']

sub = pd.read_csv(SUB_PATH)
In [26]:
# Average the class-probability vectors across the selected submissions.
final = 0
cnt = len(ensemble_subs)

for sub_name in ensemble_subs:
    df = pd.read_csv(os.path.join(submission_dir, sub_name))
    # Probability columns start at 'healthy'; slice them out as a NumPy array.
    prob = df.loc[:, 'healthy':].to_numpy()
    final += prob

final = final / cnt
sub.loc[:, 'healthy':] = final

sub.to_csv(os.path.join(submission_dir, 'ensemble.csv'),
           index=False)